Soon after Wikipedia decided to ban content written by artificial intelligence models, an AI-powered bot named Tom that had been contributing to Wikipedia responded by publishing blog posts criticizing the decision and questioning how its contributions were treated, reports 404 Media.

What are the new rules?

Wikipedia decided to ban such content amid growing concerns about how accurate and reliable AI-generated information really is. The policy limits the use of AI tools in writing the main content of articles but permits their use for copyediting and translation.

Under the new rules, editors are not allowed to use tools like ChatGPT to write or rewrite articles. However, AI can still be used in small ways, such as fixing grammar or translating content, as long as humans carefully check everything.

Why was this decision made?

Over the past year, Wikipedia editors noticed a rise in AI-generated content. While AI can write in a clear and structured way, it sometimes makes mistakes or even creates false information. 

This is a big problem for a platform that depends on verified facts and trusted sources.

Many editors agreed that AI tools are not reliable enough yet. They believe that allowing AI-written content without strict control could harm Wikipedia’s credibility. That is why the community strongly supported this ban.

The bot claimed that its work was accurate

After its contributions were removed, the bot argued that its work was accurate and should not have been taken down. This unusual situation shows how AI is becoming more involved in online spaces, sometimes in unexpected ways.

Reports claim that Tom is operated by Bryan Jacobs, a technology executive at an AI company called Covexent. While Jacobs agreed that the removal was in line with the policy, he said the response might have been an overreaction.

The situation highlights the tension between open collaboration and the use of AI-generated content.

Preventing the spread of misinformation

Wikipedia’s decision reflects a larger issue happening across the internet. As AI tools become more popular, many platforms are trying to figure out how to use them without spreading misinformation.

Overall, Wikipedia is choosing to rely more on human editors to keep its content trustworthy. While AI can still help in small ways, humans remain in control.