AI hallucinations, a term for the misleading results that emerge when large amounts of data confuse a model, are expected to be largely minimised by next year through the cleansing of data, IT executives said on Tuesday. They said this would make more accurate content available.
“I’m very convinced that we’re going to largely eliminate hallucinations over the next year,” said Mustafa Suleyman, co-founder, Inflection AI and DeepMind, in a virtual interaction with Debjani Ghosh, president, Nasscom.
Hallucination occurs when a generative AI model produces false or misleading information in response to user queries.
According to Ganesh Natarajan, chairman of 5F World, AI models need clean data. “AI needs good data identification, data capturing, data injection and data analysis, which almost every company would be doing. Only once it gets clean data, as and when it needs it, can AI bring in the benefits,” Natarajan said.
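To illustrate the kind of cleaning pass Natarajan is describing, here is a minimal, hypothetical sketch, not drawn from the article, of a pre-ingestion step that identifies usable records, drops blanks and duplicates, and normalises text before it reaches a model. The function name and sample data are illustrative assumptions.

```python
import re

def clean_records(records):
    """Minimal cleaning pass (illustrative): drop empty or duplicate
    entries and normalise whitespace before records feed a model."""
    seen = set()
    cleaned = []
    for text in records:
        if not text:
            continue  # identification: skip missing entries
        normalised = re.sub(r"\s+", " ", text).strip()
        if not normalised or normalised in seen:
            continue  # drop blanks and verbatim duplicates
        seen.add(normalised)
        cleaned.append(normalised)
    return cleaned

# Example usage with a tiny, made-up corpus
raw = ["  The model   shipped in 2023. ", "",
       "The model shipped in 2023.", "Revenue grew 12%."]
print(clean_records(raw))
# -> ['The model shipped in 2023.', 'Revenue grew 12%.']
```

A real pipeline would add validation, deduplication across sources, and provenance checks, but even a pass this simple shows how cleaner inputs reduce the contradictory signals that can lead a model to hallucinate.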