Recurring bugs & errors in OpenAI's 5.2 model
The user reports a sudden drop in response quality after the 5.2 model rollout: the model hallucinates more, answers less accurately, and fails with unexplained network issues. It also sometimes refuses to 'think' and starts spamming hieroglyph-like characters.
Why it matters
If widespread, these symptoms would point to a significant regression in the 5.2 model's performance, which is concerning for users who rely on its capabilities.
Key Points
- Sudden drop in response quality after the 5.2 model rollout
- More hallucinations, less accurate answers, and unexplained network issues
- Model sometimes refuses to 'think' and spams hieroglyph-like characters
Details
The user noticed a significant decline in the performance of OpenAI's 5.2 model around December 15-17. The model began hallucinating more, giving less accurate answers, and failing with unexplained 'network issues'. It would also sometimes refuse to 'think' even when explicitly instructed to, responding immediately instead of properly processing the input. Before the regression, the model would 'think' for 15-20 seconds on simple questions and for 2-10 minutes on advanced tasks. The user also reported the model spamming hieroglyph-like characters for no apparent reason.
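One way to sanity-check the timing claim is to measure end-to-end latency on a small batch of prompts and eyeball the outputs for garbled text. Below is a minimal sketch using the official `openai` Python SDK; the model name `gpt-5.2` and the test prompts are assumptions, since the post does not say how the user accessed the model.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical model name; the post does not specify the API identifier.
MODEL = "gpt-5.2"

prompts = [
    "What is the capital of France?",     # simple question
    "Prove that sqrt(2) is irrational.",  # harder task that should trigger 'thinking'
]

for prompt in prompts:
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    text = resp.choices[0].message.content
    # A near-instant reply on the hard prompt, or garbled characters in the
    # output, would match the regression described in the post.
    print(f"{elapsed:6.1f}s  {text[:80]!r}")
```

Running this a few times before and after a suspected rollout window would give rough latency and output-quality comparisons, though it cannot distinguish a model regression from transient infrastructure problems.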