If you confide in ChatGPT, you may want to reconsider. Weeks after OpenAI CEO Sam Altman advised against sharing confidential information with the AI chatbot for privacy reasons, the company has tweaked its user privacy policy. Under the update, ChatGPT conversations can be shared with law enforcement agencies, compromising user privacy. There is a catch, however, that every user concerned about data privacy needs to know.
OpenAI has recently implemented a new policy to actively monitor conversations. When the system detects an imminent threat of violence, it reports the user directly to law enforcement agencies. The policy change comes in the wake of a tragic incident in which a user's paranoid delusions, allegedly fueled by the chatbot, culminated in a murder-suicide, pushing OpenAI to re-evaluate its safety protocols.
OpenAI policy change reports conversations
As part of the monitoring system, human reviewers will scan flagged conversations. If this team deems a conversation a credible threat that could lead to serious physical harm to the user or others, OpenAI steps in to take action. The company can either ban the user's ChatGPT account or report them to the police.
This protocol applies specifically to threats of violence against others; the company has stated that, out of respect for user privacy, it does not report cases related to self-harm. While OpenAI points to this distinction as evidence of its ethical practices, the distinction could be challenged on legal grounds.
OpenAI has previously taken legal stances to protect user data, such as fighting a lawsuit from publishers seeking access to conversation logs. However, its new monitoring policy opens a door for law enforcement and other government agencies to gain access to private data. What remains to be seen is how the company balances its claim of shielding user privacy from surveillance by enforcement agencies while itself monitoring conversations for signs of danger.
Sam Altman previously warned about confidentiality
OpenAI's CEO, Sam Altman, has previously noted that conversations with ChatGPT do not carry the same legal confidentiality protections as those with licensed professionals such as therapists or attorneys, leaving them vulnerable to legal and corporate scrutiny. Hence, whatever one confides in ChatGPT or other AI chatbots can be used in legal proceedings either to support or to implicate the individual.
