Sam Altman and Elon Musk are tech moguls locked in a long-running rivalry, with Musk rarely missing a chance to jab at Altman’s decisions and vice versa, a dynamic that has produced plenty of ‘spirited’ public exchanges. It therefore comes as a surprising twist that OpenAI’s latest model, GPT-5.2, has begun using Grokipedia, the platform created by Musk’s xAI, as a source for real-time information.
The development marks an unexpected crossover between the two competing AI heavyweights. While Musk and Altman have frequently traded barbs on social media, their AI systems appear to be quietly intersecting in the background. According to a recent report, GPT-5.2 cited Grokipedia – Musk’s answer to Wikipedia – multiple times while answering queries, hinting at a shift in how major AI models aggregate knowledge from the web.
Grokipedia’s growing influence as a source
Launched by Musk’s xAI last year to challenge Wikipedia’s dominance, Grokipedia positions itself as an AI-generated encyclopedia. Despite facing criticism for allegedly scraping content verbatim from Wikipedia and accusations of hosting right-wing bias, the platform has evidently gained enough traction to be indexed as a credible source by its competitors.
Tests cited in reports indicate that GPT-5.2 referenced Grokipedia on at least nine occasions across a dozen queries. Topics ranged from obscure geopolitical details, such as the salaries within Iran’s Basij paramilitary force and the ownership of the Mostazafan Foundation, to biographical details about British historian Sir Richard Evans.
Notably, OpenAI is not alone in this trend. Anthropic’s Claude AI has also been observed citing Grokipedia on subjects ranging from Scottish ales to petroleum production, suggesting the site is penetrating the broader data ecosystem used by top-tier LLMs (Large Language Models).
The integration of Grokipedia as a source has raised eyebrows due to its controversial editorial stance. Critics have previously flagged the platform for spreading misinformation regarding the January 6 US Capitol attacks, climate change, and LGBTQ+ rights. Unlike Wikipedia’s decentralised, human-edited model, Grokipedia relies on a centralised, AI-backed editing system where users can suggest corrections but cannot directly alter the text.
However, safety filters appear to be holding firm. Reports note that while ChatGPT used Grokipedia for factual lookups on obscure topics, it did not repeat the platform’s more contentious claims regarding the January 6 insurrection or alleged media bias against Donald Trump.
OpenAI responds
Addressing the findings, an OpenAI spokesperson highlighted the neutrality of the company’s search mechanisms. “Our web search feature aims to draw from a broad range of publicly available sources and viewpoints,” the company stated. It further clarified that safety filters are in place to prevent the surfacing of “high-severity harms” and that all sources are clearly cited in ChatGPT’s responses.