
Spotting AI Writing: Insights from Wikipedia's Guide
AI Applied
What You'll Learn
- Wikipedia has created a comprehensive guide to detecting signs of AI-generated writing, which has become necessary due to the increasing sophistication of language models.
- One key indicator is the tendency of AI to overemphasize the importance and notability of even mundane subjects through excessive citations and claims of significance.
- AI-generated writing often includes 'tailing clauses' that use present participles to hype up the importance of the topic being discussed.
- While AI tools can be helpful in the writing process, the hosts recommend using them as a supplement to human creativity, editing, and voice, rather than relying on them exclusively.
- Spotting AI writing requires developing an intuitive sense for the patterns and quirks of AI-generated text, which becomes easier with practice.
Episode Chapters
Introduction
The hosts discuss a TechCrunch article highlighting Wikipedia's guide to spotting AI-generated writing.
Wikipedia's Guide to Detecting AI Writing
The hosts dive into the specific techniques and indicators outlined in Wikipedia's guide, such as overemphasis on notability and the use of 'tailing clauses'.
Using AI as a Supplement, Not a Replacement
The hosts emphasize the importance of using AI tools to enhance human creativity and writing, rather than outsourcing content creation entirely to AI.
AI Summary
This episode discusses how to spot AI-generated writing, drawing insights from Wikipedia's guide on the topic. The hosts explore various techniques used by AI language models that can give away their artificial nature, such as overemphasizing the importance of mundane subjects and citing excessive sources to prove notability. They emphasize the importance of using AI tools as a supplement to human creativity and editing, rather than outsourcing content creation entirely to AI.
Key Points
1. Wikipedia has created a comprehensive guide to detecting signs of AI-generated writing, which has become necessary due to the increasing sophistication of language models.
2. One key indicator is the tendency of AI to overemphasize the importance and notability of even mundane subjects through excessive citations and claims of significance.
3. AI-generated writing often includes 'tailing clauses' that use present participles to hype up the importance of the topic being discussed.
4. While AI tools can be helpful in the writing process, the hosts recommend using them as a supplement to human creativity, editing, and voice, rather than relying on them exclusively.
5. Spotting AI writing requires developing an intuitive sense for the patterns and quirks of AI-generated text, which becomes easier with practice.
Topics Discussed
- AI writing detection
- Language model biases
- AI-human collaboration in content creation
Frequently Asked Questions
What is "Spotting AI Writing: Insights from Wikipedia's Guide" about?
This episode discusses how to spot AI-generated writing, drawing insights from Wikipedia's guide on the topic. The hosts explore various techniques used by AI language models that can give away their artificial nature, such as overemphasizing the importance of mundane subjects and citing excessive sources to prove notability. They emphasize the importance of using AI tools as a supplement to human creativity and editing, rather than outsourcing content creation entirely to AI.
What topics are discussed in this episode?
This episode covers the following topics: AI writing detection, language model biases, and AI-human collaboration in content creation.
What is key insight #1 from this episode?
Wikipedia has created a comprehensive guide to detecting signs of AI-generated writing, which has become necessary due to the increasing sophistication of language models.
What is key insight #2 from this episode?
One key indicator is the tendency of AI to overemphasize the importance and notability of even mundane subjects through excessive citations and claims of significance.
What is key insight #3 from this episode?
AI-generated writing often includes 'tailing clauses' that use present participles to hype up the importance of the topic being discussed.
What is key insight #4 from this episode?
While AI tools can be helpful in the writing process, the hosts recommend using them as a supplement to human creativity, editing, and voice, rather than relying on them exclusively.
Who should listen to this episode?
This episode is recommended for anyone interested in AI writing detection, language model biases, or AI-human collaboration in content creation, as well as anyone who wants to stay updated on the latest developments in AI and technology.
Episode Description
Join hosts Conor Grennan and Jaeden Schafer as they explore Wikipedia's guide to detecting AI writing. Discover the nuances of AI-generated text and learn how to identify it with expert insights and practical tips. Tune in for an enlightening discussion.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Full Transcript
All right, Jaeden, you are showing me this article that I said we have to share on the podcast, which is how you spot AI writing. So this is an article in TechCrunch, and it says the best guide to spotting AI writing comes from Wikipedia, by Russell Brandom. Nice job by that guy. Jaeden, this is such an interesting thing. Let me just read the opening part of this article, because I think everybody in our audience is really going to love this. We've all felt the creeping suspicion that something we're reading was written by a large language model, but it's difficult to pin down. For a few months last year, I became convinced that a word like "delve" or an underscore could give it away. But the evidence is thin, and models have grown more sophisticated. As it turns out, the folks at Wikipedia have gotten pretty good at flagging AI prose, and the group's public guide to, quote unquote, signs of AI writing, which they link to, is the best resource I've found. So, Jaeden, I'm obsessed with this, because people who know AI and use a lot of AI can spot it immediately. You can spot it. I can spot it in a comment. I'm like, that's AI. That's not AI. But it's a fine line, because it's not just one thing, especially now that Sam Altman announced that you can tell ChatGPT not to use em dashes anymore, which was a huge announcement. And every model is a little different, right? So I was sharing the other day that, for the first time, ChatGPT 5.1 beat out Claude for me when I compared the two models on natural-sounding writing. And by the way, Jaeden, I just gave myself a great opening to plug AIBox.ai, which is Jaeden's company. It allows you to actually use different models right next to each other. It's $19 a month, which is so unbelievably cheap it cracks me up, and all of a sudden you have access to all these models. You can do what I do every day, except I'm spending about $400 a month going back and forth and toggling between them. Not only that, at AI Box you can also create your own workflows. You can become an agent builder with absolutely no code, like me. It's absolutely phenomenal. Check it out. Jaeden, okay, you showed me this Wikipedia article. Tell me more about it. Okay, I think this is amazing. If you actually go and click on the link in that article, there is a Wikipedia page. Well, it's not really a page like you typically see on Wikipedia; it's for Wikipedia editors. And it's basically this battle-tested field guide to detecting AI. They've been so inundated since ChatGPT came out with people writing AI pages and articles and making AI edits. They need to detect if it's AI to see if there are hallucinations, to see if it's accurate, to see if they need to fact-check it. And so they have this whole page called Signs of AI Writing, and it has the most hilarious, but also legitimate, ways to spot AI. And it's kind of interesting, because one of the big things they've outlined here that I think is really accurate, and we learned this very early on, is that there is no automated tool that can check for AI. OpenAI had a tool early on that was an AI detection tool, but it had so many false positives that it didn't work. It wasn't perfect. They actually discontinued the whole program and deleted it. They're like, okay, we're done with this. And since then, it's been up to us to try to figure out if something is AI generated.
And boy, oh boy, has Wikipedia come up with some really amazing ways to do this. They created, by the way, what they call Project AI Cleanup. It's been going since 2023, and they apparently have millions of edits coming in every single day, and they're trying to figure out which ones are real. Okay, one thing I think is really interesting, and I'll try to explain this. Conor's actually the writing guru here, so he's probably going to be correcting all of my English literature terminology. So we're leaving it up to the resident expert on that, but I'll try to explain this as best I can. One of the things this guide flags in particular is tailing clauses that make very loose claims of importance. So basically what that means is an AI model might say something about an event, and it's always going to emphasize how important that event is. It's always going to say something like, this event has a massive impact for the entire industry, and everyone thinks it's so important, when in reality it just happens to be the event you're talking about. It's mildly interesting or mildly related, but everything supposedly has massive impact, is very important, is super consequential. So anyways, whatever you're talking about, it will hype up that particular thing, even if it isn't very important. And grammar nerds will apparently know that this construction uses the present participle. Anyways, it's hard to pin down exactly, but once you recognize it, you will see it everywhere inside of AI writing. And yeah, this was the first one I saw, and I thought it was pretty spot on. It's amazing. I'm obsessed with this Wikipedia page, which is, again, Wikipedia's Signs of AI Writing. And again, all credit to TechCrunch for finding this. I just think it's so interesting to go over this sort of "you know it when you see it" kind of thing, because they try to put a framework around how to spot it. So, you know, Jaeden, it's like what you were talking about. What they say specifically here is that LLMs may include these statements, which I'll read in a second, for even the most mundane of subjects, like etymology or population data. Sometimes they add hedging preambles, acknowledging the subject is relatively unimportant before talking about its importance anyway, which is hilarious. So here's the example: the Statistical Institute of Catalonia was officially established in 1989, "marking a pivotal moment in the evolution of regional statistics." It's like, that's really not a pivotal moment. Or it's like, the founding of, oh my gosh, I don't know how to say it, Idescat, whatever, "representing a significant shift" that "was part of a broader movement." So let me do something fun here for a second. I'm just going to read these little clauses so people start to get a sense, and I know our audience will probably be nodding along. So, out of context, I'm just going to list a bunch: marking a pivotal moment, was part of a broader movement, an important center, solidify its role as a, this highlights the enduring legacy, though it only saw limited application, reflects the influence of, plays a role in the ecosystem.
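To make those tailing clauses a little more concrete, here is a minimal sketch of how you could flag them mechanically in a piece of text. This is not Wikipedia's tooling or any official detector; the phrase list and the flag_hype_clauses function are just an illustration built from the clauses read out above.

```python
import re

# Illustrative phrase list built from the clauses quoted in the episode;
# it is not exhaustive and not an official Wikipedia resource.
HYPE_PHRASES = [
    r"marking a pivotal moment",
    r"was part of a broader movement",
    r"an important center",
    r"solidif(?:y|ies|ied) its role as",
    r"highlights the enduring legacy",
    r"reflects the influence of",
    r"plays a role in the ecosystem",
    r"representing a significant shift",
]

def flag_hype_clauses(text: str) -> list[str]:
    """Return any hype-style tailing clauses found in the text."""
    hits = []
    for pattern in HYPE_PHRASES:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append(match.group(0))
    return hits

if __name__ == "__main__":
    sample = ("The Statistical Institute of Catalonia was officially established in 1989, "
              "marking a pivotal moment in the evolution of regional statistics.")
    print(flag_hype_clauses(sample))  # ['marking a pivotal moment']
```

A phrase scan like this will never be a reliable detector on its own, which is exactly the hosts' point about automated tools, but it shows why the pattern jumps out once you know what to look for.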
And here's the cool thing about the Wikipedia article: it tries to say why these things show up. So, for example, this is a great one. The subhead here is Undue Emphasis on Notability, Attribution, and Media Coverage. It says LLMs act as if the best way to prove that a subject is notable is to hit readers over the head with claims of notability, often by listing sources that a subject has been covered in. They may or may not produce additional context. Human-written press releases have, of course, always cited these things, but LLMs do it even when specifically asked to write a Wikipedia article. Anyway, here's what I mean. It's like, she spoke at a conference on AI, and then it goes into "which was featured in Vogue, Wired, the Toronto Star," and it keeps on listing, right? Or "her views have been cited in the New York Times and the BBC," or "its significance is documented in archived school event programs and regional press coverage, including..." So anyway, what I love about what Wikipedia is doing, and again, huge shout-out to TechCrunch here, is that it's not just saying it feels like AI. It's putting a finer point on it: there's an overemphasis on citation, and here's why. And nobody is ever going to memorize all these rules, but once you start getting a feel for it, I think it really starts to stick out. Yeah. No, I actually really appreciate you going through and reading a bunch of them, because as you were reading them, it just felt like ChatGPT was talking to me. And I think what's interesting is you get this so often in newsletters and press releases and so much of what is written today. You can just tell when companies are using ChatGPT exclusively, and it feels lazy. You even see posts on LinkedIn. I know, Conor, you use AI to help you come up with or format things, but you're typically going through and editing and writing it and giving it the whole idea. I mean, you're using your own brain to write your posts and stuff. And I think that's what I would recommend. People can tell, obviously these Wikipedia editors or writers can tell. And it's not to say don't use AI to help you come up with content, or to help you think through things, or to fix your grammar. But you can't just outsource everything to AI and hope you're going to have this incredible quality and no one's going to know. Right? At the end of the day, use it as a tool to help you in your creative process, but you should be the one dictating what you want to write about and what you want the conclusion to be. You should be putting in the stories. You should be going in after the fact, editing it, and giving it a human touch to make sure it's in your own tone and voice, in how you say things. So at the end of the day, I think these tools are fantastic. Don't stop using them. But it's very obvious if you exclusively copy and paste from ChatGPT. And it's hilarious to me all of the times I get messages or texts and it's clearly a ChatGPT text. I'm like, come on, this is not necessary, you know? But I like where you're going here, Jaeden, because maybe I can give you a sense of how I do this. This is just my own example, right? So when I'm writing on LinkedIn, and I write on LinkedIn pretty much every day.
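As a rough companion to the notability point, here is a small sketch of how you might score a sentence for citation stacking, meaning long runs of outlet names used to prove importance. The cue patterns, outlet list, and scoring are illustrative assumptions based only on the examples quoted above, not anything taken from the Wikipedia guide itself.

```python
import re

# Outlet names and coverage cues drawn from the examples quoted in the episode;
# both lists are illustrative, not definitive.
OUTLETS = ["Vogue", "Wired", "Toronto Star", "New York Times", "BBC"]

NOTABILITY_CUES = [
    r"(?:was|were|has been|have been) (?:featured|covered|cited) in",
    r"significance is documented in",
]

def citation_stacking_score(sentence: str) -> int:
    """Count coverage cues plus distinct outlet names in a single sentence."""
    score = sum(bool(re.search(cue, sentence, re.IGNORECASE)) for cue in NOTABILITY_CUES)
    score += sum(outlet.lower() in sentence.lower() for outlet in OUTLETS)
    return score

if __name__ == "__main__":
    s = ("She spoke at a conference on AI, which was featured in Vogue, Wired, and the "
         "Toronto Star, and her views have been cited in the New York Times and the BBC.")
    print(citation_stacking_score(s))  # 6: one coverage cue plus five outlet names
```

A high score is only a prompt for a closer human read, not proof of AI authorship; as the guide itself notes, human-written press releases stack citations too.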
And the way I do this is I go and source something, right? I go find something, I read the article, I think about what it means, and I create a first draft of it. Then, to speed up the process, I toss it into, you know, Claude, Gemini, ChatGPT, Copilot, whatever it is. And I ask for one thing. I say, can you make this more readable? And ChatGPT, from the prompt, always knows that what I mean by that is: put it into headers to break up the information. And that's pretty much all I want. In fact, I always say, here's how we're going to start, and I put that into quotes so it doesn't change it on me. And then I list it all out. And sometimes, if I'm going through, say, all the features of Gemini 3, I'll be like, hey, let's make this a little more readable. The reason I go through it all is because when people comment or ask me questions, this is how I learn. I don't need to just toss out a ton of social media. I don't care about that. I need to learn, because I'm working with companies, right? And things like that. So in order to learn, this is how I do it. But the readability is huge. So if you are going through and wanting to get the AI out of your writing, here's what I recommend. I was just taking some notes, Jaeden, based on what you were saying. First of all, cut clauses, right? If you see something that feels like "which is a really blah, blah, blah," cut that out. Okay? People don't want more information. They want clear information. The other thing, and Jaeden was alluding to this too: if you see something and think, wow, that was really well written, that is the biggest red flag. Because first of all, if it's really well written, it probably means it's AI, and it probably doesn't match your style. And also, it looks really well written to you, but everybody else is writing the same thing. They're going to see it 50 different times. To you, you're like, this is amazing, but it's not unique to you. Okay? It's not unique to you. So the thing I look for is a clause that's exceptionally well written. Giant red flag. I dumb it down every single time. Now, that's partially my brand, right? To make it simple. Where I think it's safe to keep AI writing in is if it has written something very clearly. So sometimes I'm just in a rush. I have a big paragraph. I'm like, I've been working on this for 10 minutes, I've got to move on. Can you just, like, how would I make this more clear? Sometimes it works. Sometimes it doesn't. But if it's just made it clearer and simpler, I'm like, yes, and I'll take it and stick it in. So that's how I like to think about that. It's clarity. It's not good writing. You know what I mean? A hundred percent. And if you are someone sitting there thinking, man, there is so much of this AI content out there in the world, I would definitely recommend you go follow Conor on LinkedIn, because he's putting his actual brainpower into this stuff and thinking and helping you work through these things. But if you're just sitting there listening to Conor explain how to use AI to write, and to write better, and you're like, oh my gosh, this was so informative and useful, the number one place I would recommend you go to get way more of Conor's insights is his AI Mindset course.
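For anyone who wants to try the "make this more readable" pass Conor describes, here is a minimal sketch of that kind of prompt. It assumes the OpenAI Python SDK and uses a placeholder model name; the prompt wording, the quoted-opening trick, and the make_more_readable helper are an approximation of the workflow described above, not a script the hosts actually use, and the same idea works with Claude, Gemini, or Copilot.

```python
# Minimal sketch of the "make this more readable" pass described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

def make_more_readable(draft: str, fixed_opening: str) -> str:
    """Ask the model to add headers for readability without rewriting the voice."""
    prompt = (
        "Make this more readable by breaking it up with short headers. "
        "Do not change the wording or the tone. "
        f'Start exactly with: "{fixed_opening}"\n\n{draft}'
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in whichever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage: pass in your own first draft plus the opening line you want preserved,
# then edit the output yourself so the tone and conclusions stay yours.
```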
There's a link in the description to it, and this is something I think would be an incredible value add for yourself, but also for your entire organization. I highly recommend going and checking that out. But in any case, I think this is an amazing, really funny moment where the AI can't detect the AI. We're down to these guides written by Wikipedia editors, and we're all taking notes from them. So I do appreciate them putting this out there, because I feel like I'm cluing into things that are helping me understand this more and more. These are the experts that do this all day, every day, so we appreciate the hustle on that. Thanks, everyone, so much for tuning into the podcast today. If this was informative, if you learned anything new, make sure to leave us a rating and review wherever you get your podcasts. And we hope you all have an amazing rest of your day.
Related Episodes
- Disney's Billion-Dollar Bet on OpenAI (AI Applied, 12m)
- Exploring GPT 5.2: The Future of AI and Knowledge Work (AI Applied, 12m)
- AI Showdown: OpenAI vs. Google Gemini (AI Applied, 14m)
- Unlocking the Power of Google AI: Gemini & Workspace Studio (AI Applied, 12m)
- Navigating the AI Legal Maze: Perplexity's Predicament (AI Applied, 13m)
- Exploring OpenAI's Latest: ChatGPT Pulse & Group Chats (AI Applied, 13m)