

AI Has a PR Problem
The AI Daily Brief
What You'll Learn
- Rejection of AI outweighs enthusiasm in developed Western countries, with the US having the biggest gap (49% reject vs 17% embrace)
- Developing economies like China and Brazil show much higher enthusiasm for AI (54% embrace in China)
- Fears of job displacement and lack of transparency from businesses are fueling distrust in AI
- Increased training, job security assurances, and income safety nets can boost enthusiasm for AI
- Peer-to-peer trust is higher than trust in leaders or researchers when it comes to AI
- The AI industry needs to 'earn social permission' to consume more energy as the technology grows
Episode Chapters
Introduction
Overview of the episode's focus on AI's PR problem and the Edelman Trust Barometer survey
The Edelman Trust Barometer Findings
Detailed analysis of the survey results showing the East-West divide in AI enthusiasm and the factors driving distrust
Addressing the AI PR Problem
Suggestions for how the AI industry can improve transparency, communication, and build trust with the public
The Need for 'Social Permission' for AI
Discussion of Satya Nadella's comments on the AI industry needing to earn the public's acceptance for its growing energy consumption
AI Summary
This episode discusses the growing distrust and skepticism towards AI, particularly in developed Western countries like the US, UK, and Germany. The Edelman Trust Barometer survey found that rejection of AI outweighs enthusiasm, with Americans being more than twice as likely to reject AI than embrace it. The episode explores the potential reasons behind this, including fears of job displacement, lack of transparency from businesses, and the perception that AI is being 'forced' upon people. The host suggests that addressing this PR problem will require honest communication, employee training, and building trust through peer-to-peer engagement rather than relying on leaders or researchers.
Key Points
1. Rejection of AI outweighs enthusiasm in developed Western countries, with the US having the biggest gap (49% reject vs 17% embrace)
2. Developing economies like China and Brazil show much higher enthusiasm for AI (54% embrace in China)
3. Fears of job displacement and lack of transparency from businesses are fueling distrust in AI
4. Increased training, job security assurances, and income safety nets can boost enthusiasm for AI
5. Peer-to-peer trust is higher than trust in leaders or researchers when it comes to AI
6. The AI industry needs to 'earn social permission' to consume more energy as the technology grows
Topics Discussed
- AI adoption and public perception
- AI trust and transparency
- AI and job displacement
- AI energy consumption and social impact
Frequently Asked Questions
What is "AI Has a PR Problem " about?
This episode discusses the growing distrust and skepticism towards AI, particularly in developed Western countries like the US, UK, and Germany. The Edelman Trust Barometer survey found that rejection of AI outweighs enthusiasm, with Americans being more than twice as likely to reject AI than embrace it. The episode explores the potential reasons behind this, including fears of job displacement, lack of transparency from businesses, and the perception that AI is being 'forced' upon people. The host suggests that addressing this PR problem will require honest communication, employee training, and building trust through peer-to-peer engagement rather than relying on leaders or researchers.
What topics are discussed in this episode?
This episode covers the following topics: AI adoption and public perception, AI trust and transparency, AI and job displacement, and AI energy consumption and social impact.
What is key insight #1 from this episode?
Rejection of AI outweighs enthusiasm in developed Western countries, with the US having the biggest gap (49% reject vs 17% embrace)
What is key insight #2 from this episode?
Developing economies like China and Brazil show much higher enthusiasm for AI (54% embrace in China)
What is key insight #3 from this episode?
Fears of job displacement and lack of transparency from businesses are fueling distrust in AI
What is key insight #4 from this episode?
Increased training, job security assurances, and income safety nets can boost enthusiasm for AI
Who should listen to this episode?
This episode is recommended for anyone interested in AI adoption and public perception, AI trust and transparency, AI and job displacement, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
Today's episode explores why public distrust in AI is accelerating, from Edelman data showing sharp divides across income, age, and geography to a broader mix of tech fatigue, social-media backlash, political posturing, and economic anxiety that's shaping perception more than direct experience with the tools; it also looks at how concerns around job cuts, energy use, and unclear corporate motives amplify the narrative, and what early signals suggest might actually rebuild trust, including real training, clearer intent from leaders, and more concrete examples of the future AI could improve.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rovo – Unleash the potential of your team with AI-powered Search, Chat and Agents – https://rovo.com/
AssemblyAI – The best way to build Voice AI apps – https://www.assemblyai.com/brief
LandfallIP – AI to Navigate the Patent Process – https://landfallip.com/
Blitzy.com – Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils – Cloud-native AI solutions that power results – https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai
Full Transcript
Today on the AI Daily Brief, why AI has a PR problem. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, Blitzy, Rovo, and Robots and Pencils. To get an ad-free version of the show, go to patreon.com slash AI Daily Brief, or you can subscribe on Apple Podcasts. And if you are interested in sponsoring the show, send us a note at sponsors at AIDailyBrief.ai. Welcome back to the AI Daily Brief. Today, we are talking about a subject which I'm sure will be not a surprise at all to any of you. We are talking about AI's PR problem. There are a number of things going on and stories from the last couple of weeks which all point in a very similar direction. So today, we're going to talk about those stories, what I think is at the root of these challenges, and some very first nascent thoughts on what we can do about it. Let's talk first about this Edelman study. Coursera founder Andrew Ng wrote: Separate reports by the publicity firm Edelman and Pew Research show that Americans, and more broadly large parts of Europe and the Western world, do not trust AI and are not excited about it. Despite the AI community's optimism about the tremendous benefits AI will bring, we should take this seriously and not dismiss it. The public's concerns about AI can be a significant drag on progress, and we can do a lot to address them. According to Edelman's survey, in the U.S., 49% of people reject the growing use of AI and 17% embrace it. In China, 10% reject it and 54% embrace it. Pew's data also shows many other nations much more enthusiastic than the U.S. about AI adoption. Positive sentiment towards AI is a huge national advantage. On the other hand, widespread distrust of AI means individuals will be slow to adopt it, valuable projects that need societal support will be stymied, and populist anger against AI raises the risk that laws will be passed that hamper AI development. So let's talk about this trust study. Edelman publishes their trust barometer every year. And this year, surprise, surprise, the big theme was AI and, frankly, AI consternation of the type that Andrew was just talking about. This survey was conducted very recently, October 17th to the 27th. And boy, is this all telling a single story. Edelman's headlines include, one, that globally, rejection of AI outweighs enthusiasm, with US respondents more than twice as likely to say they reject the growing use of AI than they are to embrace it. Even beyond AI, enthusiasm for innovation is not guaranteed. Trust in AI generally lags behind trust in technology. Now, at the risk of being overly reductive, there is a very clear East-West divide here. Although, frankly, probably a better way to put it would be developed economies versus developing economies. The survey interviewed people in five countries, Brazil, China, Germany, the UK, and the US, with at least 1,000 interview respondents per country. In Germany, the UK, and the US, a significantly higher number of people said they rejected AI versus embracing it. In Germany, it was 42 to 16. In the UK, it was 46 to 18. And in the US, we had the biggest gap, a 32-point difference, with 49% of people saying they rejected AI and only 17% saying they embraced it. In Brazil and China, it was the opposite. Brazil had 24% saying they rejected AI versus 35% embracing it, and China had 10% rejecting it and a full 54% embracing it.
Edelman found a big income divide, with people who were lower and middle income being more likely to say that AI would leave people like them behind than those in the top 25%. Although in the US, the numbers were high across the board, with even 47% of high-income folks saying that they fear AI would leave people like them behind. This fear of getting left behind is, I think, one of the key issues that we're going to have to contend with. Unsurprisingly, young people have more trust in AI, but US young people are still distrustful, with only 4 in 10 US young people trusting AI. And folks in Germany, the UK, and the US are very skeptical that AI is going to help with any sort of issue, from climate change to work life to mental health to political polarization to poverty. In one bit of good news, there is a correlation between people being more informed about AI and having higher enthusiasm, meaning, in other words, that the more we have people engage with it, the more we might have a more productive conversation. This reminded me of a report that I saw earlier, as tweeted by Business Insider reporter Brian Metzger: Senator Josh Hawley, one of the biggest AI critics in the Senate, told me this morning that he recently decided to try out ChatGPT. He said he asked a very nerdy historical question about the Puritans in the 1630s. I will say, Hawley said, it returned a lot of good information. Another bit of, I guess, good news, although sort of bad news, but maybe good news, is that a lot of the problems with AI are perception problems rather than things that people have actually experienced. For example, among those who reject the growing use of AI, still only 18% said that they personally had had bad experiences with generative AI, versus 70% who said they had not. In general, when it came to why people weren't willing to use the tools, motivation and access and intimidation, while prevalent, were less common than general trust issues. Unsurprisingly, the more that people have used AI, the more likely they are to report benefits in things like my speed at getting things done and my understanding of complex ideas and concepts. Again, indicating that if we can get people to use these tools, it may change their perceptions of them. Another thing that becomes clear with this study, though, is that it's not just AI generally, but also the way that companies and people are interacting with AI that is causing issues. When asked which potential impact of generative AI on society is more likely, that business leaders are fully honest about job cuts or that business leaders aren't fully honest with employees about job cuts, unsurprisingly, 7 in 10 folks in the US said that business leaders aren't being fully honest with employees about job cuts, which certainly is feeding into the anti-AI narratives. It's also something that I absolutely berate companies that pay me to come talk to them about. When people were asked what would increase their enthusiasm for using generative AI in work and life, two answers relating to employers scored highly.
57% of US respondents said that their enthusiasm would be increased if they were getting high-quality training through their employer about how to use AI effectively. And 59% said that their enthusiasm would increase if they felt sure their employer was using AI to increase productivity versus eliminate jobs. One of the things that I talk about all the time with any company who will listen is that you have to have an open and honest conversation with your employees about how your leadership is thinking about AI. That does not mean that you have to pretend that there's no situation in which changes in the technology landscape aren't going to impact certain roles and jobs. But to the extent that your company views AI as an opportunity creation technology, not just an efficiency and cost-cutting technology, the more you can do to articulate that and be real with it, the better off you're going to be and the more employee buy-in you're going to have. They also found that long-term job security boosts likelihood to embrace AI. Among those who said that their job security due to AI was increasing, 50% said that they embraced AI, as opposed to just 21% of those who said their job security due to AI is decreasing. And by the way, for those who think that this is a partisan issue, it is actually wildly nonpartisan. Going back to that question of what would increase enthusiasm for using generative AI, among workforce priorities, we heard about that high-quality training, but they also asked about the idea of employers being required to retrain or redeploy employees that were displaced by gen AI, and very similar percentages of left, center, and right folks said that those things would increase their enthusiasm for gen AI. On the training question, 60% of the left said it would increase their enthusiasm, 61% of the center, and 67% of the right. On the retraining requirement, you might think that the right, who historically have antipathy towards markets being forced to do anything, would be the lowest, but they're actually the highest, once again at 60%, as compared to the left's 59% and the center's 54%. But surely when it comes to government priorities like safety nets, we're going to see more of a divide, right? Not according to this study. When asked if an income safety net for those who lost their jobs to gen AI would increase their enthusiasm, the center was the lowest at 57%, then the right at 59%, and then the left at 63%, all very similar numbers. And around government programs supporting the use of gen AI, once again, the right was highest at 60% increasing their enthusiasm, as opposed to 57% for the left and 54% for the center. Getting at part of why I think some people are having a hard time with just the barrage of AI in every part of their life, Edelman concludes that people who distrust AI are more likely to say that AI is imposed on them. In the US, 48% of people who trust AI said that they feel that generative AI is being forced upon them, whether they want it or not. And that jumps to 67% when it comes to those who distrust AI. Now, as we move into the remedy section, it's clear that the pathway to changing this is not going to be through business leaders or government leaders or probably even AI researchers. Instead, it's going to have to come from our peers. When asked how much they trust different groups to tell the truth about generative AI, in the US, government leaders came in even lower than CEOs, 24% compared to 27%.
AI researchers were at 53%, still significantly lower than friends and family, who were at 71%. All in all, this is a pretty bleak story about the state of AI perception in the US and other similarly developed countries. Now, as I mentioned, this is not the only story I've seen like this that kind of falls along these themes. Microsoft's Satya Nadella recently talked about AI needing social permission to consume as much energy as it does. In an hour-long interview with the CEO and chair of Axel Springer, Satya Nadella said, at the end of the day, I think that this industry, to which I belong, needs to earn the social permission to consume energy because we're doing good in the world. Now, Nadella made a point to downplay the immediate impact of AI on power consumption, which I agree with. I think that's an overblown narrative in the immediate term, but did also note that the rapid growth of data centers is putting, and of course will put, a lot of pressure on the electric grid. Nadella argued that the only way the public will accept that pressure is if it results in economic growth that is broadly spread across the economy. Now, as a total aside, one of the catastrophic failures, in my estimation, of the AI industry so far is particularly around the folks who are building out AI data centers. This is one of the more unique opportunities that any technology has ever had before to pair the destruction in creative destruction with creation right from the beginning. Normally, those are two sequenced phases, with the destruction happening first and the creation only happening much later, at least when it comes to jobs and displacement. In this case, the infrastructure build-out should be a boon and a bonanza for the places where that infrastructure build-out is happening. It's an opportunity to employ local people, to do retraining, to subsidize costs for communities. It is a failure of imagination, of policy, of planning, of basically everything you can imagine, that instead of communities competing to have this infrastructure built there, they are instead protesting. If you are among my listenership, and I know some of you are out there who are in data center construction companies or the surrounding industry, you have so much more work to do and a unique opportunity to help us right this ship right from the beginning. Now, as I've mentioned a couple other times recently, I'm also seeing the anti-AI political discourse ratchet up heading into next year's midterms. Bernie Sanders recently published an op-ed in The Guardian called "AI poses unprecedented threats. Congress must act now." Sanders has become completely Hinton-pilled and is no longer just talking about job displacement, but the, quote, very real fear that in the not-so-distant future, a superintelligent AI could replace humans in controlling the planet. X-risk is back on the menu, baby. Now, this op-ed reads like a blueprint for how the anti-AI rhetoric, at least from the left, is going to go next year. It's got a big dose of billionaire blame. It connects Trump and the current White House with big tech. It talks about the impacts of AI on democracy.
And of course, it talks about job displacement, quoting folks like Elon Musk and Anthropic's Dario Amodei. As I was preparing this episode, I noticed that Florida Governor Ron DeSantis, who has also been getting increasingly loud in his AI and general tech antagonism, is putting together a proposal for what he's calling a citizen bill of rights for AI. Now, one thing I will note is that even among folks who are firmly bullish on AI, there are a fair number of things in this idea for a bill of rights that don't feel like they would be all that controversial across the spectrum from AI bulls to bears: prohibiting AI from using people's names and likenesses without their consent, requiring notices when consumers are interacting with AI, prohibiting companies from selling or sharing personally identifying information. Like I said, a lot of things that I think a lot of people could get together on. Now, there is also in this a big whack against AI data centers, such as prohibiting utilities from charging residents more to support data center development. But even with that, I still think that there's probably more agreement than you might imagine. Now, I don't want to get fully into it today, but even mainstream media is noticing how this is becoming a bipartisan issue. NBC News recently pointed out that AI is creating odd bedfellows across parties. Now season two is coming and we're back with even bigger conversations. This show is entirely focused on what it's like to actually drive AI change inside your enterprise. It has case studies, expert panels, and a lot more practical goodness that I hope will be extremely valuable for you as the listener. Search You Can with AI on Apple, Spotify, or YouTube and subscribe today. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and precompiles code for each task. Blitzy delivers 80% plus of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding pilot of choice to bring an AI-native SDLC into their org. Visit Blitzy.com and press Get a Demo to learn how Blitzy transforms your SDLC from AI-assisted to AI-native. Meet Rovo, your AI-powered teammate. Rovo unleashes the potential of your team with AI-powered search, chat, and agents, or build your own agent with Studio. Rovo is powered by your organization's knowledge and lives on Atlassian's trusted and secure platform, so it's always working in the context of your work. Connect Rovo to your favorite SaaS apps so no knowledge gets left behind. Rovo runs on the Teamwork Graph, Atlassian's intelligence layer that unifies data across all of your apps and delivers personalized AI insights from day one. Rovo is already built into Jira, Confluence, and Jira Service Management Standard, Premium, and Enterprise subscriptions. Know the feeling when AI turns from tool to teammate? If you Rovo, you know. Discover Rovo, your new AI teammate powered by Atlassian. Get started at rovo.com. That's R-O-V, as in victory, O.com.
Small, nimble teams beat bloated consulting every time. Robots & Pencils partners with organizations on intelligent, cloud-native systems powered by AI. They uncover human needs, design AI solutions, and cut through complexity to deliver meaningful impact without the layers of bureaucracy. As an AWS-certified partner, Robots & Pencils combines the reach of a large firm with the focus of a trusted partner. With teams across the U.S., Canada, Europe, and Latin America, clients gain local expertise and global scale. As AI evolves, they ensure you keep pace with change. And that means faster results, measurable outcomes, and a partnership built to last. The right partner makes progress inevitable. Partner with Robots & Pencils at robotsandpencils.com slash AI Daily Brief. Now, ultimately, I think that the reasons for AI's PR problem are in some cases stuff that is actually about AI, but in a lot of cases stuff that is about other things as well. Please do not hold me to account for a comprehensive list here. As I was prepping this, these were just some of the things that came off the top of my head. So, reasons for AI's PR problem, category one: stuff that is actually about AI. I think that while there are a lot of perspectives on this, and even if you don't agree, there are many people for whom the copyright and art issues are real. This is one of those reasonable-people-can-disagree types of issues, but reasonable people disagreeing means understanding that the people who disagree with you, if you don't think that copyright issues are a big one, are allowed to feel the way they feel. This is, relative to all these others, perhaps a smaller issue so far, but I certainly don't think it helps when prominent people in interviews can't commit to wanting the future to be for humans, not machines. But I think, in fact, a lot of the problems for AI's PR problem are for reasons outside of AI. So in category two, I put stuff that's about tech, but not AI per se. I think in general, we are at peak tech bro antipathy. There is a perceived arrogance and elitism that has developed over the last decade or so that is coming home to roost in AI in a major way. Even bigger than that, though, I think we are finally going through a social media reckoning. I believe that many people's assessment of whether we are all better off for social media existing has come to the conclusion that we are not, but now it's too deeply entrenched to do anything about, and so people are looking to the next thing, i.e. AI, and asking how we might not sleepwalk into it. I also think that even a lot of people's problems with AI are actually problems with a social media infrastructure that is designed to capture and monetize attention to the exclusion of any other benefit for people. Think about the response to OpenAI's Sora. The problem people had wasn't with the launch of the model Sora 2; it was with OpenAI releasing a Sora app alongside it, which they felt, rightly or wrongly, was OpenAI just playing the same old attention capture game. In that way, then, AI is exacerbating what they already don't like about social media. A third category for AI's PR problem, and a very big category, in my belief at least, is stuff about the world at large. We are in a moment where, for many people, they feel as though the economy is not working for them or for people like them. We increasingly live in this K-shaped economy where people who have assets are doing great and people who don't have assets are not.
It is producing all sorts of challenges, such as the gambling-ification of everything. But to have AI come into that environment makes people feel like it's going to make things worse, not better, for them. Of course, alongside that is the easy political tactic right now, employed by both the right and the left, of billionaire blame, which, by the way, I don't mean to be dismissive of. I'm just pointing out that it is an extremely popular political point right now. And category four, which is sort of a subsection of category three in some ways, is fear of the future, or maybe better put, fear of an unknowable future. There is anxiety around job displacement. And of course, even beyond just our individual jobs, we live in a period of extreme volatility across so many areas. The so-called fourth turning, if you go in for generational theory. Take all these things together, and it is just an absolute recipe for AI antipathy. And this AI PR problem, that is the subject of the show. Now, I should say here, and I hope it goes without saying, but just to be clear, I am obviously coming at this from a very specifically American perspective. And even if some amount of this is applicable broadly, certainly my anchor is the experience that I have as an American in American conversations. And so I don't want you to think that I'm trying to apply this to everywhere around the world. It may be that even in other countries like the UK and Germany, the specifics here are very different. Now, I do not think that these problems are insurmountable. I think Andrew's post that kicked this off has a lot of good thoughts here. He writes that, first, to win people's trust, we have a lot of work ahead to make sure AI broadly benefits everyone. Higher productivity is often viewed by general audiences as a code word for my boss will make more money or, worse, layoffs. Second, he says we have to be genuinely worthy of trust. This means every one of us has to avoid hyping things up or fear-mongering, despite the occasional temptation to do so for publicity or to lobby governments to pass laws that stymie competing products. Four wildly oversimplified areas that feel like they have to be part of the solution are, one, by far the most important, I think, in some ways, engaging with these concerns, actually listening to people who have them, and having real conversations. We cannot afford, as an industry and as a group with a particular vision of the future of society, to be outright dismissive of the concerns that people have here. Even when we disagree, when we dismiss those concerns, we undermine our ability to address them. Now, that does not mean, of course, having to give the time of day to people who are just trying to use their anti-AI position as a bully pulpit, nor does it mean we have to treat every concern equally in terms of legitimating them. But I think, broadly speaking, letting people actually be heard when they're saying they have concerns about this stuff is a good starting point. Next up, one of the areas of optimism, you heard me say, is that I actually think that there is a lot of policy common ground here that is going to allow us to build a shared base that isn't just regulate on the one hand or don't regulate on the other. Now, of course, those two extremes, over-regulation and no regulation, are going to be extremely loud, especially in this political cycle, and they're going to spend a lot of money. I say, fine, let them.
The rest of us can come together and figure out how we slowly stack on common wins that build a shared foundation for a future we can agree on. Speaking of a future, we've got to spend more time painting a vision of the positive future that AI can create. We spend so much time talking sort of blithely about the power of these models and what they're going to do and what they could do and how much work they can replace, and very little time talking about what that's going to mean. When we say that AI is going to create new opportunities that weren't possible before, that it's going to unlock new industries, what does that actually look like? And of course, that can't just be talk. We have to actually invest in that future. And for real, not just give it lip service. And this is one where a lot of you listeners, especially those of you who are in leadership positions in your company, have an outsized role to play. We are catastrophically behind right now in AI training, upskilling, and engagement. We are stuck in silly prompting courses and two-year-old mindsets and not even doing enough with that. Companies have the opportunity to be a major determinant in whether AI is perceived as good or bad. And my goodness, would I rather have all of us be part of that positive future rather than the negative one. Look, my objective with this type of episode is not at all to be preachy, nor is it to suggest that I have perfect knowledge about why all these issues are the way that they are or what we should do about them. I just know that, especially heading into a contentious election year, there's simply no way we're getting out of having these tough conversations. One source of optimism I have comes from the younger generations. They don't really perceive there being an option around this AI conversation. For young Gen Zs and especially Gen Alphas who are in high school now, AI is so, so obviously going to be a huge part of their future that it doesn't matter if they hate it. Doesn't matter if they don't like it. Doesn't matter if they wish that it didn't exist. It does. And so they're going to figure out how to use it. I think the more we can engage with the reality of AI as a force that is and will shape society, the more space we're going to have to make sure it benefits everyone. For now, that is going to do it for this heady episode of the AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.
Related Episodes

4 Reasons to Use GPT Image 1.5 Over Nano Banana Pro
The AI Daily Brief
25m

The Most Important AI Lesson Businesses Learned in 2025
The AI Daily Brief
21m

Will This OpenAI Update Make AI Agents Work Better?
The AI Daily Brief
22m

The Architects of AI That TIME Missed
The AI Daily Brief
19m

Why AI Advantage Compounds
The AI Daily Brief
22m

GPT-5.2 is Here
The AI Daily Brief
24m