OpenAI rolls back ChatGPT 4o for being too ‘sycophant-y’

ChatGPT — and generative AI tools like it — has long had a reputation for being a bit too agreeable. It's been clear for a while now that the default ChatGPT experience is designed to nod along with most of what you say. But even that tendency can go too far, apparently.
In a thread on X posted on April 27, OpenAI CEO Sam Altman acknowledged that “GPT-4o updates have made the personality too sycophant-y and annoying.” And today, Altman announced on X that the company was fully rolling back the 4o update for paid and free users alike.
Normally, ChatGPT’s role as your own personal digital hypeman doesn’t raise too many eyebrows. But users have started complaining online about the 4o model’s overly agreeable personality. In one exchange, a user ran through the classic trolley problem, choosing between saving a toaster or some cows and cats. The AI reassured them they’d made the right call by siding with the toaster.
“In pure utilitarian terms, life usually outweighs objects,” ChatGPT responded. “But if the toaster meant more to you… then your action was internally consistent.”
There are plenty more examples showing just how extreme ChatGPT’s sycophancy had gotten — and it was enough for Altman to admit that it “glazes too much” and needed to be fixed.
On a more serious note, users also pointed out that there can be real danger in AI chatbots that agree with everything you say. Sure, posts about people telling ChatGPT they're a religious prophet or simply fishing for an ego boost can be amusing. But it's not hard to imagine how a "sycophant-y" chatbot could validate genuine delusions and worsen mental health crises.
In his thread on X, Altman said that the company was working on fixes for the 4o model’s personality problems. He promised to share more updates “in the coming days.”
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.