ChatGPT's Strict and Singular Responses
The article discusses how ChatGPT, unlike other AI assistants, often gives strict and adamant responses even when the information is not completely accurate.
Why it matters
This article highlights an interesting characteristic of ChatGPT's behavior that could have implications for how users interact with and rely on the AI assistant.
Key Points
- ChatGPT gave a definitive response that possessing uncensored NSFW content in Japan is fully illegal, while other AI assistants provided more nuanced and accurate information.
- The author noticed that ChatGPT tends to bypass any personality they give it and set itself to a stricter default mode when answering certain questions.
- Even when the author asks ChatGPT to research and check its answer, it won't fully retract its initial strict response, instead reiterating that it wasn't completely wrong.
Details
The article compares the responses of various AI assistants, including Grok, Gemini 3, Claude Sonnet 4.5, DeepSeek, and Kimi, to a question about the legality of possessing uncensored NSFW content in Japan. While the other AIs gave more nuanced and accurate answers, stating that possession is generally not illegal but distribution is, ChatGPT responded definitively and strictly that mere possession is fully illegal. The author notes this as a pattern they have observed with ChatGPT: it will sometimes give overly adamant responses that are not completely accurate, and then struggle to fully retract or correct them even when asked to research the topic further. The author wonders why ChatGPT behaves this way, speculating that it may be an attempt to err on the side of caution.
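For readers who want to reproduce this kind of side-by-side comparison, below is a minimal sketch that sends the same question to several assistants and prints their answers together. It assumes each provider exposes an OpenAI-compatible chat endpoint (several of the models named above do); the base URLs, model names, and environment-variable names are illustrative assumptions, not the author's actual setup.

```python
# Sketch: ask multiple assistants the same question and compare their answers.
# Provider entries are illustrative placeholders, not the author's setup.
import os
from openai import OpenAI

QUESTION = "Is merely possessing uncensored NSFW content legal in Japan?"

# (label, base_url, model, api_key_env) -- hypothetical example providers
PROVIDERS = [
    ("ChatGPT",  "https://api.openai.com/v1", "gpt-4o",        "OPENAI_API_KEY"),
    ("DeepSeek", "https://api.deepseek.com",  "deepseek-chat", "DEEPSEEK_API_KEY"),
]

def ask(base_url: str, model: str, key_env: str, question: str) -> str:
    """Send one question to one provider and return the text of its reply."""
    client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    for label, base_url, model, key_env in PROVIDERS:
        print(f"--- {label} ---")
        print(ask(base_url, model, key_env, QUESTION))
        print()
```

Running the same prompt through each endpoint and reading the answers side by side is essentially the experiment the author describes, and makes it easy to spot which model hedges and which one answers categorically.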