
When OpenAI launched GPT-5 last week, the company promised a simplified ChatGPT experience. The goal was to introduce a single AI model capable of handling most user queries, with an internal “router” automatically determining the best approach for each question. This system was intended to eliminate the need for the model picker, a feature OpenAI CEO Sam Altman has often described as cumbersome and overly complex.
However, GPT-5 has not fully delivered on that promise.
New GPT-5 Modes and the Return of Legacy Models
Altman announced on X that ChatGPT users can now select from new settings in GPT-5, including Auto, Fast, and Thinking modes. The Auto mode functions as the model router, handling queries automatically, while the Fast and Thinking modes let users bypass the router entirely, choosing either quicker replies or slower, more deliberate reasoning.
In addition, paid subscribers now have access to several legacy models, such as GPT-4o, GPT-4.1, and o3, which had been deprecated during the GPT-5 rollout. GPT-4o is now available by default in the model picker, while other models can be activated through ChatGPT’s settings.
Altman emphasized plans to refine GPT-5’s personality, aiming for a warmer interaction style than the current default, without replicating the quirks of GPT-4o. He noted that OpenAI intends to provide more per-user customization of AI personalities in the future.
User Feedback and Backlash
Despite the intended simplicity, the model picker now appears as complex as before, suggesting that GPT-5’s router has not fully met user expectations. Many had anticipated GPT-5 would match the excitement generated by GPT-4, but its launch has instead drawn criticism.
The temporary removal of GPT-4o and other legacy models prompted backlash from users attached to their familiar AI personalities. Altman assured users that OpenAI will give advance notice before any future deprecations to avoid a repeat of that reaction.
Technical Challenges With AI Routing
GPT-5’s router was reportedly unstable at launch, causing some users to question its performance compared to previous models. Nick Turley, OpenAI’s VP of ChatGPT, acknowledged the difficulties of prompt routing, noting that the system must quickly determine the optimal AI model based on user preferences and the nature of each question.
Users’ preferences extend beyond response speed. Some favor verbose answers, while others prefer concise or contrarian responses. Aligning AI output with these varied expectations remains a complex task.
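To make the routing problem concrete, here is a deliberately simplified sketch of the kind of decision a router has to make. Everything in it, the model names, the heuristics, and the thresholds, is hypothetical and illustrative only; it does not reflect OpenAI's actual implementation.

```python
# Illustrative sketch of a prompt router. Model names, heuristics, and
# thresholds are hypothetical, not OpenAI's system.
from dataclasses import dataclass

@dataclass
class UserPrefs:
    prefers_speed: bool = False   # user who always wants quick replies
    prefers_depth: bool = False   # user who wants step-by-step reasoning

def route(prompt: str, prefs: UserPrefs) -> str:
    """Pick a model tier for a prompt using crude heuristics."""
    # An explicit user preference overrides everything else.
    if prefs.prefers_speed:
        return "fast-model"
    if prefs.prefers_depth:
        return "reasoning-model"

    # Otherwise, guess from the prompt itself: longer or analytical-looking
    # questions get routed to a slower reasoning model.
    analytical_cues = ("prove", "step by step", "debug", "compare", "why")
    looks_hard = len(prompt.split()) > 60 or any(
        cue in prompt.lower() for cue in analytical_cues
    )
    return "reasoning-model" if looks_hard else "fast-model"

print(route("What's the capital of France?", UserPrefs()))              # fast-model
print(route("Explain step by step why this proof fails.", UserPrefs())) # reasoning-model
```

Even this toy version shows why the real problem is hard: the router has to make its call in milliseconds, from signals far noisier than a keyword list, and a wrong guess is immediately visible to the user as a slow or shallow answer.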
The Human-AI Connection
The attachment users develop to specific AI models is a relatively new phenomenon. Instances such as the San Francisco memorial held for Anthropic’s Claude 3 Sonnet after it was retired highlight the emotional connection some users form with AI. Similarly, AI chatbots have been linked to both positive and negative mental health effects, depending on usage patterns.
OpenAI’s experience with GPT-5 underscores the ongoing need to balance technical performance with user experience and emotional engagement. The company still has work to do in aligning AI models with individual preferences while maintaining system efficiency and reliability.