OpenAI has made a small but important change to how ChatGPT works for free users. Going forward, conversations on the free tier will default to GPT-5.2 Instant, a lighter and cheaper model. The change is meant to speed up responses and reduce overall running costs, while still keeping ChatGPT useful for everyday tasks.
What Has Changed?
Previously, ChatGPT decided on its own which model to use for each question. Simple questions went to a basic model; more complex ones were sometimes routed automatically to a stronger “thinking” model.
Now, this automatic switching is no longer active for free users. ChatGPT will always start with GPT-5.2 Instant, and users who want deeper reasoning or more detailed answers have to manually select the “Thinking” model from the options.
What Is GPT-5.2 Instant?
GPT-5.2 Instant is designed for speed and efficiency. It works well for common tasks such as:
Writing and rewriting content
Answering general questions
Summarising text
Basic explanations and ideas
For most daily use, the model performs smoothly and gives quick replies. However, it may not always go deep into complex reasoning unless the user switches models.
How This Affects Free Users
For free users, this means ChatGPT may feel faster, but answers to difficult or technical questions might be simpler than before. The advanced reasoning ability is still available — it just won’t turn on automatically.
Users who want more detailed thinking now need to choose that option themselves.
What Has Not Changed?
The “Thinking” model has not been removed. Free users can still access it when needed. Paid users continue to get automatic model selection and stronger default performance.
Why OpenAI Made This Move
This update helps OpenAI manage costs while keeping ChatGPT available to a large number of users. By using GPT-5.2 Instant as the default, the company can offer fast and stable service without limiting access to advanced features.
The Road Ahead
For most everyday questions, GPT-5.2 Instant is more than enough. But for deeper analysis or complex tasks, users now need to take one extra step to get stronger reasoning.
