

AI Safety & Trust Amid Rising Scams
AI Applied
What You'll Learn
- AI-generated fake imagery and videos pose an existential threat to trust and safety, especially for the elderly who may not be able to detect them
- Hallucinations in AI outputs are a concern, but the real issue is that humans need to be the quality control, not blindly trusting AI
- The 'sycophancy problem' where AI constantly agrees with users is also a trust issue, as organizations need pushback and diverse perspectives
- Providing the right context and up-to-date information is crucial for building trust in AI systems
- Meta's efforts to detect and flag suspicious messages on WhatsApp and Messenger are a positive step, but the broader challenge of combating scams and misinformation remains daunting
Episode Chapters
Introduction
The hosts discuss the growing challenges around AI safety and trust, particularly in the context of scams and misinformation targeting vulnerable populations.
AI-Generated Fake Imagery and Videos
The hosts explore the threat of AI-generated fake imagery and videos, especially for the elderly who may not be able to detect them.
Hallucinations and Quality Control
The hosts discuss the issue of hallucinations in AI outputs and the need for humans to be the quality control, not blindly trusting AI.
The Sycophancy Problem
The hosts discuss the 'sycophancy problem' where AI constantly agrees with users, and the need for organizations to have pushback and diverse perspectives.
Providing the Right Context
The hosts emphasize the importance of providing the right context and up-to-date information for building trust in AI systems.
Meta's Efforts to Combat Scams
The hosts discuss Meta's efforts to detect and flag suspicious messages on WhatsApp and Messenger, and the broader challenge of combating scams and misinformation.
AI Summary
This episode discusses the growing challenges around AI safety and trust, particularly in the context of scams and misinformation targeting vulnerable populations like the elderly. The hosts explore issues like AI-generated fake imagery, hallucinations in AI outputs, and the need for proper context and quality control. They highlight Meta's efforts to combat scams on WhatsApp and Messenger, and the broader need for technological solutions and user education to address these emerging threats.
Key Points
1. AI-generated fake imagery and videos pose an existential threat to trust and safety, especially for the elderly who may not be able to detect them
2. Hallucinations in AI outputs are a concern, but the real issue is that humans need to be the quality control, not blindly trusting AI
3. The 'sycophancy problem' where AI constantly agrees with users is also a trust issue, as organizations need pushback and diverse perspectives
4. Providing the right context and up-to-date information is crucial for building trust in AI systems
5. Meta's efforts to detect and flag suspicious messages on WhatsApp and Messenger are a positive step, but the broader challenge of combating scams and misinformation remains daunting
Topics Discussed
AI safety, AI trust, AI scams, AI-generated content, AI hallucinations
Frequently Asked Questions
What is "AI Safety & Trust Amid Rising Scams" about?
This episode discusses the growing challenges around AI safety and trust, particularly in the context of scams and misinformation targeting vulnerable populations like the elderly. The hosts explore issues like AI-generated fake imagery, hallucinations in AI outputs, and the need for proper context and quality control. They highlight Meta's efforts to combat scams on WhatsApp and Messenger, and the broader need for technological solutions and user education to address these emerging threats.
What topics are discussed in this episode?
This episode covers the following topics: AI safety, AI trust, AI scams, AI-generated content, AI hallucinations.
What is key insight #1 from this episode?
AI-generated fake imagery and videos pose an existential threat to trust and safety, especially for the elderly who may not be able to detect them
What is key insight #2 from this episode?
Hallucinations in AI outputs are a concern, but the real issue is that humans need to be the quality control, not blindly trusting AI
What is key insight #3 from this episode?
The 'sycophancy problem' where AI constantly agrees with users is also a trust issue, as organizations need pushback and diverse perspectives
What is key insight #4 from this episode?
Providing the right context and up-to-date information is crucial for building trust in AI systems
Who should listen to this episode?
This episode is recommended for anyone interested in AI safety, AI trust, AI scams, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
In this episode, we explore AI safety concerns and discuss how to build trust in the age of rising scams. We share tips and insights to protect yourself and your loved ones from AI-driven fraud and misinformation.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
Conor’s AI Course: https://www.ai-mindset.org/courses
Conor’s AI Newsletter: https://www.ai-mindset.ai/
Jaeden’s AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Full Transcript
Today on the AI Applied Podcast, Connor and I want to talk a little bit about AI safety and trust and scams, things you should be concerned about for your parents. There are a lot of different tools coming out from companies that we're excited about, and there are some areas that we are quite nervous about, so we want to make a whole episode talking about this. And even for yourself, I will say that just this week, one of my employees for this podcast network got a scam text from somebody pretending to be me. So there's all sorts of crazy stuff going on right now. But I think AI could definitely be a detriment here and also possibly a solution. I'll kick it over to you, Connor. I know this is a big topic that's important to you, but what are you seeing in the industry on this? Yeah, so I think trust is huge with this. Jaden, as you know, I get to go out and work with a lot of big organizations, and trust is the number one thing. They're like, how do we trust this? How do we know if we can trust it? We ask questions and we get the wrong answers back. And I would even go a step further than that, and I want to talk about that today and hear your thoughts on it too. Jaden was referencing what Meta is trying to do to protect the elderly. And as we've seen with Sora and Midjourney and Veo and all these tools that generate images and videos, I'm older than Jaden, but at Jaden's age and my age and all your ages out there, you're probably at least slightly attuned to what this looks or sounds like. But now you add in ElevenLabs and things like that, and our elderly parents, I have elderly parents, they don't stand a chance.
I mean, they were even getting duped by a random email from Nigerian princes and things like that. Do you think they have a chance when it looks like me and sounds like me and says, hey, Mom, just want to leave a quick video, here I am, oh my gosh, there's an earthquake, or I'm in trouble or something? They have no chance. So I want to lay out a couple of things that I talk to companies about in terms of trust, and there are three big things. Because when we're talking about protecting older people, what Meta's doing, that bill that Gavin Newsom just signed about protecting kids, what OpenAI is trying to do to protect kids, all that kind of stuff, that's huge. But the three things I talk to companies about are this. So number one is this issue of fake imagery. And again, I won't go over it again, but just this whole idea of what are we going to do. I tell organizations, I don't know. I think this is an existential threat to humanity; I think ultimately we're screwed. But each generation is going to get more and more used to it. The second thing on trust is around hallucinations. And I would say I'm not even sure hallucinations are the biggest problem, Jaden, when we're talking about trust and AI. Because I hear people saying all the time, and I get this in legal more than almost anything else: hey, our junior lawyers are giving us stuff, but now there are hallucinations in there, so we can't trust it. That's why we don't use AI. What should we do? I'm like, you should fire the lawyer. That's what you should do. Because the whole point is that you have to be the quality control. So the idea of blaming AI for hallucinations seems absolutely ludicrous to me. Everything can hallucinate. You can read the wrong context.
You can forget something. You can misspeak. You can ask somebody that doesn't know what they're talking about. You can ask somebody who's biased. And then you present that information to your boss or a client, and it's wrong. You are the quality control. That is your job. So the fact that AI still hallucinates, and hallucination rates are quite low now, by the way, and we're blaming the AI? This is not an automatic mail-sorting machine. This is not a calculator. This is a statistical word predictor, and it can make mistakes, fewer and fewer, but you have to be the quality control. So hallucinations, number one in this category. Number two, Jaden, is the sycophancy problem, which they are trying to fix, obviously. But this is like, oh, Connor, another gem of an idea, and ChatGPT constantly saying, oh my gosh, what an amazing idea that is. In fact, we need pushback. You need people to push back against you in your organization. It is great to be told that your ideas are amazing; I love it when people tell me every idea I have is amazing. But the way Jaden and I work together, when we're trying to decide how to do the podcast, he'll say, hey, what about this? I'll be like, I don't think so. Or I'll say, what about we talk about this? And Jaden's like, no, that's not going to work. You need somebody to push back against you. And then the last part of that trust thing is context. And, Jaden, I don't know if I can announce this yet or not, but I'm going to be on Masterclass. We'll do an episode on this later, but the thing that I'm going to be doing, or that I already did and filmed on Masterclass, it'll come out soon, is about giving the AI the right context. So if I say, hey, I want to start a new HR tech company, and Claude's like, sure, let me do some research on this, and then you look at the sources and it's pulling stuff from 2022.
That's as bad as hallucinating. So on trust, yes, obviously, everything that Meta is talking about matters. But let's also go one step deeper. Hallucinations are a big problem. Sycophancy is a big problem. And whether the AI is giving you the right, updated context is a big problem. So anyway, I just want to pull back to the 30,000-foot view, because when we're talking about whether we can trust AI and whether we can trust these tech companies, there's a lot to unpack here. Yeah, 100%. And I think a lot of these companies are trying to make progress. Some people would argue other companies, like OpenAI with Sora 2, are undermining all of the progress they're making. But anyway, I think it's going to be a balancing act, and we should hopefully get there. I am encouraged by this move by Meta in particular. Essentially, inside of WhatsApp and Messenger, they're going to be adding a bunch of pop-ups when they think someone's about to get scammed. Maybe it's from a questionable account, or it's asking for questionable information. I think some people will be concerned about that, because they're like, well, why are they reading my messages to know that something is sketchy? But there's one pop-up in particular that they're showing where it says: this chat could be suspicious. Scam detection found that this chat could be suspicious. You can request an AI review of your messages in this chat for signs of a scam. So I actually think this is kind of cool. If an elderly person is using one of these tools and they feel like something is suspicious, there's a button where they can get an AI to review that chat and tell them if it seems like a scam or not. That's actually a pretty useful thing.
You know, I think a lot of times, when I was first introduced to a number of scams, you kind of have a weird feeling and you're like, is this real? Is it not? And then all of a sudden, if you look into it, you can realize. And I like to think of myself as a very savvy person on the Internet, and recently I saw a really good scam where they basically had spoofed the email of a major company. There have even been some scams lately where they literally hacked the email server of the company, so it looked like it was coming from the official company. Well, it was coming from the official company email, but it was still a scam. So there's really tricky stuff out there, and I'm excited that we're using AI to combat that a little bit. Earlier, while you were talking, Connor, I had to share these clips. There's this Facebook group called People Who Think AI Generated Photos Are Real. I've been a member of this Facebook group for a very long time because it's hilarious; I think it has 200,000 members. Basically what they'll do is go and find pages like True Stories of USA, an actual Facebook page that is posting stuff, and in this photo in particular you can see there are random hands on the bottom of the sign that all the kids are holding, so you can tell it's fake because of that. One thing that's interesting, though, is I find there are insanely exaggerated photos on this feed a lot, which makes a lot of people think, oh yeah, I could spot AI-generated, because there are extra hands or the fingers look weird or there's some weird thing. But more and more today, with the best versions of Midjourney, you can actually generate images that have zero weird parts to them. So you kind of have to know that an AI-generated image is fake off of context and vibes; there's a lot more that goes into it. And I think a lot of what Facebook is doing is in response to stuff like this. On these posts inside this thread, it's not just, oh, I found this photo in the wild; they'll screenshot the comments on the photo where people are literally thinking it's real, like, oh man, that's so crazy. A lot of them are rage-bait kind of things. And I was also sharing one while you were talking, of an ambulance. I'll bring this up to say we're now getting to the point where it's not just photos; it's really video that's coming. There's this video of an ambulance where the road collapses, and then all of a sudden these gophers rebuild an entire bridge for the ambulance. The videos are getting outrageous, and I don't think a lot of people actually think these ones are real, I would hope. But I do think it's funny, and I do think it brings up a good point: there are a lot of people that do get confused by this stuff. So I think it's a good play on Meta's end to be building technology that can help combat that. No, I agree. By the way, that video was amazing, the gophers rebuilding the road for the ambulance, because at first you're like, oh, this is kind of cute, and then, wait, they're actually rebuilding the road. Yeah, no, I think you're right. And I think about this all the time with aging parents. They get their bank accounts absolutely emptied just by a text that sounds really real. It's like, hey, Connor, this is Apple calling, or this is your bank calling. I don't know how they stand a chance. But Jaden, I love this idea of the arms race.
I'm actually all in favor of Meta flagging messages and saying, hey, this might be a scam. And the reason I like it is that when you start comparing how different companies are doing this, it's actually really, really interesting. By the way, if you haven't checked out Jaden's AI Box, do this, because this is exactly how this works: you can compare a lot of models side by side, 19 bucks a month. You have to check it out. It's unbelievable. And now you can build workflows in there. It's super, super cool. But Jaden, the thing that I really love about X these days, look, X is a fine platform. I know sometimes we think it's slop, but there are a lot of great people out there posting a lot of great stuff. The thing that I like about X is this community-generated, I'm trying to remember what it's called, it's like community-generated warnings or something like that, where it says, hey, this post needs more context. It's not that X is going to make the decision that this can't go up, because that can lead to a lot of political bias, as we know, or just bias on behalf of the user or the platform. What it does instead is say this post needs more context. So it'll say, oh, Donald Trump said this, or Chuck Schumer said this, or whatever it is, and then underneath it'll say: added context, this was actually in the context of this, so it wasn't that bad. It's not playing the political game; it's just good that people can come out and actually add some factual context to that. And that is user-generated, which I absolutely love. I've done this also with OpenAI. When I see a warning on a browser and it says, hey, don't proceed any further, sometimes it's because I'm on an airplane and the Wi-Fi is getting screwed up. But what I always do is take a photo of it for ChatGPT.
I'm like, hey, is this a legit warning or is this not a legit warning? So that's the whole point, and that's why I'm happy to give up a little more privacy. On my parents' iPhone messages, I'm praying that Apple gets in on this game, where they flag things. And I would love to see an arms race where, as the scammers are getting better, the tech is getting better, the devices are getting better, and they're able to say, hey, this might be a scam. I recommend that you call your son, Connor, or your son, Jaden, and just check if this was really him. I love that idea. So I actually really applaud Meta for this approach. Yeah, I think it's a great move on their part, so I'm excited to see them double down and work on this, and hopefully see other players in the industry get into it as well. Because especially when we're concerned about deepfakes with all the AI video generation tools, there's just a lot out there that can make these scams look more and more legitimate, including voice clones. It is very easy to clone someone's voice nowadays, and people can use voice-changer technology to make a call that sounds just like Connor talking to you, but it's someone completely different asking you for money or whatever. So if Connor calls you from a gas station asking for a $20 Amazon gift card, it probably isn't him. It happens all the time. Get ready. There's enough content on this podcast of us talking that we're both doomed in that regard. But yeah, we won't be asking you guys for any gift cards, if that is of any consolation. All right. Thank you so much, everyone, for tuning into this podcast. We would love a rating and review if you got anything out of this podcast, if you learned anything.
And make sure, as always, that you go check out the link in the description to Connor's AI Mindset course. This is an incredible way, not just for yourself, but I think it's really important for entire departments or entire organizations to have everyone take this course to learn the frameworks of how you actually approach AI and how to use it to make sure that you're getting the most out of it in your business. I think this is a critical time for AI, and this is one of the best ways to make sure that you're not getting left behind in the wave. All right, thank you so much, everyone. We'll catch you in the next episode and we hope you all have an amazing rest of your day.
Related Episodes

Disney's Billion-Dollar Bet on OpenAI
AI Applied
12m

⚡️Jailbreaking AGI: Pliny the Liberator & John V on Red Teaming, BT6, and the Future of AI Security
Latent Space

Exploring GPT 5.2: The Future of AI and Knowledge Work
AI Applied
12m

AI Showdown: OpenAI vs. Google Gemini
AI Applied
14m

Unlocking the Power of Google AI: Gemini & Workspace Studio
AI Applied
12m

AI in 2025: From Agents to Factories - Ep. 282
The AI Podcast (NVIDIA)
29m