AI hallucinations to be minimised by next year

Hallucination happens when a generative AI model provides false information

AI hallucinations, a term for misleading results generated when large amounts of data confuse the model, are expected to be minimised to a large extent by next year owing to data cleansing, IT executives said on Tuesday. They said this would result in the availability of more accurate content.

“I’m very convinced that we’re going to largely eliminate hallucinations over the next year,” said Mustafa Suleyman, co-founder, Inflection AI and DeepMind, in a virtual interaction with Debjani Ghosh, president, Nasscom.

Hallucination happens when a generative AI model provides false or misleading information in response to queries from users.

According to Ganesh Natarajan, chairman of 5F World, AI models need clean data. “AI needs good data identification, data capturing, data injection and data analysis, which almost every company would be doing. Once it gets clean data, as and when it needs it, only then AI can bring in the benefits,” Natarajan said.
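For a concrete sense of what such cleansing involves, the sketch below shows one basic filtering pass: dropping empty, trivially short, and duplicate records before data reaches a model. The record structure, field names and thresholds here are illustrative assumptions, not a description of any company's actual pipeline.

```python
# A minimal, illustrative sketch of the kind of data cleansing described
# above: filter out junk before records are fed to an AI model.
# Field names ("source", "text") and the length threshold are assumptions
# made for this example, not any specific company's pipeline.

from dataclasses import dataclass


@dataclass(frozen=True)
class Record:
    source: str
    text: str


def cleanse(records: list[Record]) -> list[Record]:
    """Drop empty, implausibly short, and duplicate records."""
    seen: set[str] = set()
    clean: list[Record] = []
    for rec in records:
        text = rec.text.strip()
        if len(text) < 20:  # too short to be informative (assumed cutoff)
            continue
        if text.lower() in seen:  # exact duplicate after normalisation
            continue
        seen.add(text.lower())
        clean.append(Record(rec.source, text))
    return clean


if __name__ == "__main__":
    raw = [
        Record("crm", "Customer reported billing error on invoice #4821."),
        Record("crm", "customer reported billing error on invoice #4821."),  # duplicate
        Record("web", "ok"),  # too short
    ]
    print(cleanse(raw))  # only the one clean record survives
```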

This article was first uploaded on February 21, 2024, at 11:00 am.