ChatGPT’s Voice Mode could lead to unintended social bonds with AI: OpenAI

OpenAI, the maker of the popular chatbot ChatGPT, has warned that its new Voice Mode feature could lead users to form social bonds with the AI model.


The potential drawbacks of OpenAI’s human-like AI are numerous, and this self-critique highlights a significant concern: while such AI can provide companionship and assistance, it might also unintentionally affect real-world relationships and social norms, ultimately altering the quality and nature of human interactions.

The warning appears in the company’s System Card for GPT-4o, which lays out a comprehensive assessment of the AI model’s potential risks and safety measures. Among the identified risks, the company highlights the concern that users may anthropomorphise the chatbot and develop emotional attachments to it.

“Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models. This risk may be heightened by the audio capabilities of GPT-4o, which facilitate more human-like interactions with the model,” notes the blog post.

OpenAI conducted extensive safety evaluations to address potential risks associated with GPT-4o, particularly focusing on its new audio capabilities. During this evaluation, the AI model was subjected to rigorous red teaming by over 100 experts from diverse linguistic and geographic backgrounds. These evaluations spanned several phases, from early development to real-world usage in iOS applications.

During early testing, there were instances where users seemed to develop connections with the model, using language that implied a bond with the AI, such as “This is our last day together.” OpenAI warns that while these interactions appear harmless at present, they highlight the need for further investigation into how such effects might evolve over extended periods.

The blog highlights that interacting with an AI model in this way could affect how people interact with one another. For example, some users might form bonds with the AI and feel less need for human contact. While this could help people who feel lonely, it might also affect their real-life relationships.

The chatbot company warns that spending a lot of time with the AI might change social norms. To mitigate this, ChatGPT models let users interrupt and take control of the conversation at any time, discouraging such bond formation.


This article was first uploaded on August 9, 2024, at 4:06 pm.