Artificial intelligence chatbots, which internet users increasingly rely on to verify viral images, are struggling to accurately detect fabricated visuals, even when the images were created by the same AI systems, according to several cases documented by AFP.
In one case from the Philippines, users sought to confirm the authenticity of a viral image that appeared to show former lawmaker Elizaldy Co, who faces corruption charges, in Portugal. Google’s new AI Mode incorrectly labelled the image as genuine. AFP later traced the photo to its creator, who confirmed it had been generated with Google’s AI image tool.
AI lacks visual judgment
Experts told AFP that these failures highlight the limitations of current AI models, which are trained primarily on text and lack the capacity for nuanced visual analysis. “These models are trained primarily on language patterns and lack the specialised visual understanding needed to accurately identify AI-generated or manipulated imagery,” Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.
Similar errors were observed during protests in Pakistan-administered Kashmir, where a fake image of demonstrators carrying flags and torches circulated online. Although the image had been created with Google’s Gemini model, both Gemini and Microsoft’s Copilot incorrectly verified it as an authentic protest photo, AFP reported.
Researchers warn that as users increasingly turn to AI chatbots instead of traditional search engines for verification, the risk of misinformation spreading grows. A Columbia University study cited by AFP found that seven AI models, including ChatGPT and Gemini, failed to correctly identify the origins of real journalistic photographs.
Fact-checking gap widens
The issue is compounded by the scaling back of human fact-checking efforts. Meta recently ended its third-party fact-checking programme in the US, shifting to a crowd-sourced system called ‘Community Notes’. With image-generation tools such as Nano Banana and ChatGPT now in routine use, experts say the dilemma is no longer access to the technology but accountability for what it produces.
