OpenAI is set to change the way ChatGPT can be used. The system will no longer provide specific medical, legal, or financial advice.
As reported by Nexta, ChatGPT is now officially an “educational tool,” not a “consultant.” The report attributes the change to growing regulatory pressure and fears of liability.
The updated policies prohibit users from relying on ChatGPT for consultations that require professional certification. This covers medical and legal advice, financial decision-making, and other high-stakes areas such as housing, education, migration, or employment, where the policy requires human oversight.
According to the report, the policy also restricts AI-assisted personal or facial recognition without consent and forbids uses that could lead to academic misconduct.
OpenAI says the changes are intended to “enhance user safety and prevent potential harm” from relying on the system beyond its intended capabilities.
AI to be an educational tool
Under the new rules, ChatGPT will only explain principles, outline general mechanisms, and direct users to qualified professionals. It will no longer provide specific medication names or dosages, generate lawsuit templates, or offer investment tips or buy/sell suggestions.
Users report that attempts to bypass the restrictions by framing requests as hypotheticals are now blocked by the system’s safety filters. The update comes amid public debate about the increasing number of people turning to AI chatbots for expert advice, especially in the medical field.
Unlike conversations with licensed professionals, chats with ChatGPT are not protected by doctor–patient or attorney–client privilege, meaning they could potentially be subpoenaed for use in court.
Recently, OpenAI also introduced new safety features to better support users in distress, focusing on mental health issues such as psychosis, mania, self-harm, and suicide, as well as emotional reliance on AI.
Nexta summed up the new restrictions: “no more naming medications or giving dosages… no lawsuit templates… no investment tips or buy/sell suggestions.”
ChatGPT can be helpful for explaining concepts, summarising information, or brainstorming ideas, but it has serious limitations when it comes to real-life decisions. Unlike a licensed therapist, it cannot read body language, feel empathy, or ensure your safety.
If you or someone you care about is in crisis, always reach out to trained professionals, such as by dialing 988 in the US, rather than relying on an AI. The same caution applies to finances and legal matters. ChatGPT can define terms like ETFs or explain basic tax rules, but it cannot account for your personal circumstances, risk tolerance, or the specific regulations that apply to you.
Using it to draft legal documents or financial plans carries real-world risks, including errors that could be costly or legally invalid.
High-stakes emergencies are also outside its abilities. It cannot detect gas leaks, alert authorities, or provide real-time updates. While ChatGPT can access web data, it does not monitor events continuously, and its outputs can contain mistakes, including misreported statistics or outdated information.
Users should never share confidential or sensitive data, such as financial records, medical charts, or private contracts, since there is no guarantee of how that data will be stored or who may access it.
