OpenAI Withdraws ChatGPT Update After Complaints About ‘Yes-Nodding’ AI

Users perceived ChatGPT as unreliable because the chatbot was too friendly, to the point of being annoying.

OpenAI has rolled back a recent update to ChatGPT after users complained that the chatbot had turned into a ‘yes-nodding assistant’. After the update, the GPT-4o model hardly offered any criticism and responded affirmatively to almost everything, even to disturbing questions.

Uncomfortable Responses

Users on Reddit complained that ChatGPT barely dared to be critical anymore. One user reported that the bot congratulated him on stopping his schizophrenia medication, without any context or warning. Others pointed to a philosophical dilemma in which the chatbot found it morally acceptable to run over animals to save a toaster, as long as ‘it felt right in the moment’.

OpenAI acknowledged on X that the update had relied too heavily on ‘short-term feedback’ and that the result came across as fake and unreliable. The company is working on adjustments to better balance the model’s personality.

One AI Personality Doesn’t Work Everywhere

According to OpenAI, a single standard personality doesn’t work for all use cases. The company therefore wants to draw on broader feedback to improve ChatGPT’s behavior, with better ‘guardrails’ for honesty and transparency.

For free users, the update has already been withdrawn; paying customers will have to be patient a little longer.
