OpenAI’s ChatGPT 4, a powerful AI-based language model that can generate text, essays, lengthy articles and other creative content, is now also able to analyse images and describe them in vivid detail. The new capability has alarmed OpenAI officials, who are concerned about its potential for misuse.
Jonathan Mosen, a blind man who has used assistive technology for a long time, was chosen to test the new ChatGPT 4 feature that allows him to analyse images. The feature worked so well for him that it has left OpenAI, the company that developed ChatGPT 4, a bit worried.
Mosen was recently on a trip when he used the visual analysis feature to identify the shampoo, conditioner, and shower gel in his hotel room bathroom. He was also able to determine the millilitre capacity of each bottle and the type of tiles in the shower. Mosen was given early access to the visual analysis feature by Be My Eyes, a startup that connects people who are blind with volunteers for help. The startup joined hands with OpenAI this year to test the company’s visual capabilities.
The visual analysis feature is still in beta testing, but OpenAI is already concerned about the potential risks. The company is worried that the feature could be used for facial recognition, which could raise privacy concerns.
The technology developed by the company can identify public figures, primarily those with a Wikipedia page, but it is not as comprehensive as tools designed specifically for facial recognition on the internet.
There is also a fear that making the feature widely accessible could have legal implications, especially in jurisdictions where it is mandatory to obtain consent from people before using their biometric information, including facial features.
OpenAI is also concerned that the tool could end up giving inappropriate assessments about people’s faces. Sandhini Agarwal, an OpenAI policy researcher, expressed the company’s intention to engage in a two-way conversation with the public. If the feedback indicates that people do not want such a feature, OpenAI is fully willing to accommodate those concerns.
Agarwal also mentioned that OpenAI’s visual analysis has the potential to generate “hallucinations” similar to those observed with text prompts. “If you give it a picture of someone on the threshold of being famous, it might hallucinate a name,” she said. “Like if I give it a picture of a famous tech C.E.O., it might give me a different tech C.E.O.’s name.”
Follow FE Tech Bytes on Twitter, Instagram, LinkedIn, Facebook.
