Wikipedia Bans AI in Article Writing
In a significant move, Wikipedia has prohibited its 260,000 human editors from using artificial intelligence to create articles. This decision comes amidst a rising tide of what some are calling “AI slop” cluttering the internet.
The new rule, approved by a vote of volunteer editors, specifically bans the use of large language models like ChatGPT to write articles. The main reasons behind the ban relate to concerns over accuracy, sourcing, and reliability.
According to Wikipedia’s leaders, AI-generated text often violates the site’s core principles, including its strict standards for verifiability and neutrality. This is largely because such text is prone to “hallucinations”—fabricated information, broken links, or references that don’t exist.
While editors are now barred from using AI for large-scale content creation, they can still use it in limited ways, such as translating articles between languages or suggesting minor edits. These changes, however, must be reviewed and approved by a human before being published.
Last year, Wikipedia introduced guidelines to help editors identify AI writing. This involves spotting potential red flags, like inaccurate quotes, overused phrases, and sudden changes in writing style. If any suspicious content is detected, it undergoes review, allowing other editors to challenge or remove it if necessary.
Ilyas Lebleu, a volunteer editor from France and a founding member of the WikiProject AI Cleanup team, noted that many suspect articles were written in styles unlike those typically found on the site, and voiced concern about the growing volume of these AI-generated pieces.
Wikipedia co-founder Jimmy Wales has also expressed disappointment with current AI models, calling them unreliable and the resulting situation a “chaos,” and suggesting that these technologies aren’t ready to take over tasks that human editors perform.
This policy revision stems from extensive discussions among Wikipedia moderators, who reached a vote of 40-2 in favor of the ban.
Lebleu, known on the platform as Chaotic Enby, said the change was necessary because the volume of AI-generated articles had grown beyond what editors could effectively review. He described a shift in mood within the community—from cautious optimism to palpable anxiety about the future.
Many within the Wikipedia community worry that AI’s influence may have already gone too far. Data indicates that ChatGPT has overtaken Wikipedia in monthly pageviews, with Wikipedia’s traffic in late 2025 down 8% from the previous year.
ChatGPT’s use has surged meanwhile, with a 36% increase in users from late 2023 to early 2024, while other platforms reported only minor changes in engagement.
The shift carries a certain irony for a 25-year-old platform that has long positioned itself as a trusted source of information—and that served as a training resource for the very AI models behind ChatGPT.
Discussing these developments, Lebleu suggested the implications are far-reaching: Wikipedia may be just the starting point for a larger reckoning across online communities over the role of AI.
“As fears about an AI bubble intensify,” he speculated, “we could see a domino effect where other platforms decide how to handle AI themselves.”