Instead of giving out quick, reliable responses to user searches, Google's AI Overviews is reportedly prone to "hallucinations," churning out fake information, misinterpretations and other improbable results. Since its launch last summer, the Google feature, which offers a preview of a query's topic drawn from a variety of sources, has become the target of vehement criticism for a number of reasons.

A new, and possibly the most unsettling, criticism joined the list recently after critics weighed in on Google AI Overviews rolling out "confidently wrong" answers, according to a report by The Times of London.

Appearing at the top of the page after users punch in their queries on Google, these AI-generated answers, which essentially summarise the topic, rely on a range of sources, including information from web publishers and Google's Knowledge Graph. Google's Gemini is responsible for putting the synopsis together, alongside links to the source articles.

What do these hallucinations mean?

Experts refer to AI-generated text presenting non-existent facts or mistakes as "hallucinations." In April, it was reported that Google AI hallucinated idioms while struggling to identify real phrases. When asked what "You can't lick a badger twice" means, the feature classified it as an idiom, describing it as "a warning that if someone has been deceived, they are unlikely to fall for the same trick again."

In addition to inventing meanings for nonsense sayings, Overviews even went as far as suggesting that users add non-toxic glue to pizza to make the cheese stickier.

At the time, Liz Reid, Google’s head of search, addressed the inaccuracies churned out by Google AI, saying, “some odd, inaccurate or unhelpful AI Overviews certainly did show up. And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve.”

Google AI driving traffic away from the source articles

However, experts now argue that AI Overviews are directing users away from the actual sources of information. Rather than steering users towards these legitimate sources, The Times report insisted, Google's AI has caused a slump in the number of web users actually reading the full articles.

Despite Alphabet CEO Sundar Pichai coming to the feature's defence, Laurence O'Toole, founder of the tech firm Authoritas, found that AI Overviews had brought down the rate at which users click through to the listed articles by between 40 and 60 per cent.

Even Google AI downplays its hallucination rate

A reporter is believed to have posed the big-ticket question to Google itself. When asked about the AI feature's hallucination rate, the Google AI search preview claimed "low hallucination rates" of just between 0.7 and 1.3 per cent.

To some extent, the answer was correct, as affirmed by Hugging Face, an AI platform company that tracks such data. The latest model available through Google's Gemini app is said to have a hallucination rate of 1.8 per cent.

In another instance, Google AI was also asked whether it steals art. It responded by reiterating its innocence, adding that AI "doesn't steal art in the traditional sense." As for whether we should be afraid of AI, the feature highlighted concerns tied to the technology, only to dismiss the fears as possibly "overblown."

On the other hand, OpenAI's internal tests presented a different picture altogether. They revealed that the recent models o3 and o4-mini were more prone to hallucinations than their predecessors. In worse-than-ever figures, o3's hallucination rate was 33 per cent when questions related to real people and the facts were readily available. The o4-mini painted an even bleaker picture with 48 per cent hallucinations, meaning it produced not only false information but also imaginary data.