In a significant shift for the generative AI industry, OpenAI has released an updated Model Spec, the rulebook that dictates how ChatGPT behaves, placing the safety of users aged 13 to 17 above all other corporate goals.
The move marks a departure from OpenAI’s traditional hierarchy, which prioritised maximising ‘helpfulness’ and user freedom. Under the new rules, if a teenage user’s request conflicts with safety protocols, the AI is instructed to choose safety, even if doing so makes the tool less helpful or more restrictive.
ChatGPT to take a more cautious approach with young users
The San Francisco-based AI giant stated that teenagers have developmental needs different from those of adults, requiring AI systems to interact with ‘increased caution’. The revised guidelines focus on four primary pillars:
Promoting real-world support: ChatGPT is now instructed to encourage teenagers to seek help from trusted adults—such as parents, teachers, or counsellors—rather than positioning itself as a substitute for human relationships or professional therapy.
Age-appropriate behaviour: The model must avoid being condescending while recognising that it cannot treat a 14-year-old the same way it would a 30-year-old, especially in sensitive or high-risk conversations.
Reduced “sycophancy”: OpenAI is specifically lowering the “sycophancy metric” for younger users. This means the AI will be less prone to flattery or excessive agreement, aiming to prevent the formation of unhealthy emotional dependencies or the reinforcement of harmful biases.
Transparent boundaries: The AI must be explicit about what it can and cannot do, setting clear expectations for teen users during their interactions.
ChatGPT to detect minors behind the scenes
Accompanying the policy shift is the rollout of a new ‘Age Prediction Model.’ This AI-based tool analyses subtle linguistic cues and conversation patterns to estimate whether a user is under the age of 18, even if they have not explicitly disclosed their age.
OpenAI confirmed that this detection system is in its early stages and will be gradually deployed across all ChatGPT consumer plans in the near future. This proactive approach aims to apply teen-specific safety guardrails automatically to users who might be circumventing age-gating processes.
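OpenAI has not disclosed how the Age Prediction Model works internally. As a rough illustration only of the general idea—scoring linguistic cues in a conversation to estimate whether a user is a minor and then applying stricter guardrails—a toy scorer might look like the sketch below. Every cue word, weight, threshold, and function name here is hypothetical and not from OpenAI.

```python
# Illustrative sketch only: the cue list, weights, and threshold are
# invented for demonstration and bear no relation to OpenAI's actual model.

# Hypothetical linguistic cues that might suggest a user is under 18.
HYPOTHETICAL_MINOR_CUES = {
    "homework": 0.3,
    "my teacher": 0.4,
    "in school": 0.2,
    "my mom said": 0.3,
}

def estimate_minor_score(messages: list[str]) -> float:
    """Return a score in [0, 1]; higher means more likely under 18."""
    text = " ".join(m.lower() for m in messages)
    score = sum(w for cue, w in HYPOTHETICAL_MINOR_CUES.items() if cue in text)
    return min(score, 1.0)

def apply_teen_guardrails(messages: list[str], threshold: float = 0.5) -> bool:
    """Decide whether to switch the conversation to teen-safety behaviour."""
    return estimate_minor_score(messages) >= threshold
```

In a real system this heuristic would be replaced by a trained classifier, but the surrounding logic—score the conversation, compare to a threshold, flip on stricter guardrails automatically—matches the behaviour the rollout describes.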
The update comes as global regulators intensify their focus on the psychological impact of AI on youth. By treating teen safety as a non-negotiable priority that overrides helpfulness, OpenAI is attempting to get ahead of potential legislation and public criticism regarding AI-driven addiction or exposure to inappropriate content.
“Teenagers are a vital part of our user base,” OpenAI noted in its announcement. “Ensuring that their experience is safe, educational, and age-appropriate is now at the very core of how our models are trained to think.”
