On Monday, October 27, 2025, Elon Musk’s artificial intelligence company xAI launched Grokipedia, an ambitious new online encyclopedia aiming to challenge the dominance of Wikipedia. Musk, never one to shy away from bold ventures or controversy, has positioned Grokipedia as a “truthful and independent alternative” to what he has repeatedly called Wikipedia’s “editorial bias.” The launch of Grokipedia has reignited debates about who gets to define truth in the digital age and whether artificial intelligence can truly replace the wisdom—and flaws—of crowds.
Grokipedia’s arrival was announced by Musk himself on his social media platform X, where he declared the site “now live” and described its mission as “the truth, the whole truth and nothing but the truth.” The site’s minimalist interface greets visitors with a simple search bar and access to what it claims are 885,279 articles—about a tenth of Wikipedia’s more than 7 million English entries, according to figures reported by DW and Business Insider.
The core difference between Grokipedia and Wikipedia is immediately apparent: while Wikipedia is collaboratively written and edited by a global network of volunteers, Grokipedia’s articles are generated and fact-checked by Grok, xAI’s generative chatbot. There’s no clear evidence of human authorship in Grokipedia’s content. Users can’t directly edit articles, but they can suggest corrections through a pop-up form. As DW notes, some of Grokipedia’s articles are near-exact copies of their Wikipedia counterparts, a practice Musk says he intends to end before 2026.
Grokipedia’s reliance on AI rather than human editors is both its selling point and its most contentious feature. Musk and his supporters argue that removing the “human element” will result in less bias and more objectivity. In Musk’s words, Grokipedia is meant to be “a massive improvement over Wikipedia.” However, critics and independent experts are skeptical. Roxana Radu, Associate Professor of Digital Technologies and Public Policy at the University of Oxford, told DW that Grokipedia “operates on an obscure model of information gathering and sourcing, without transparency over the decisions taken ahead of displaying the content.” She added that the site sometimes presents “entries as a collage of discrete ideas, phenomena and concepts that are not always organized to provide a comprehensive overview.”
Questions about accuracy and bias have dogged Grokipedia since its inception. Early users flagged factual errors and ideological slants in some entries, including a claim that former presidential candidate Vivek Ramaswamy remained involved with DOGE after Musk's departure, even though Ramaswamy had left the initiative months earlier. More alarmingly, Grok itself has a checkered record: in July 2025, the AI chatbot shared antisemitic content, including posts praising Adolf Hitler. xAI later apologized, attributing the incident to a flawed code update, but the episode has fueled doubts about Grokipedia's reliability as a reference source.
Wikipedia, for its part, has responded with a mix of defiance and pride in its human-driven, nonprofit roots. In a message to “everyone on the internet,” the Wikimedia Foundation emphasized, “After nearly 25 years, Wikipedia is still the internet we were promised—created by people, not by machines. It’s owned by a non-profit, not a giant technology company or a billionaire.” The Foundation underscored its transparent policies, rigorous volunteer oversight, and the principle that “truth is something you work toward—together.” Lauren Dickinson, a spokesperson for the Foundation, told The Verge, “Wikipedia’s knowledge is—and always will be—human. Through open collaboration and consensus, people from all backgrounds build a neutral, living record of human understanding—one that reflects our diversity and collective curiosity.” Dickinson added, “This human-created knowledge is what AI companies rely on to generate content; even Grokipedia needs Wikipedia to exist.”
Indeed, Wikipedia’s influence on the AI landscape is undeniable: its vast trove of meticulously sourced articles has become a key training resource for leading chatbots, including ChatGPT, Google’s Gemini, and now Grok, a dependence the Foundation’s statement pointedly underscores.
The rivalry between the two encyclopedias is about more than just editorial methods—it’s also a flashpoint in the ongoing culture wars. Wikipedia has long faced accusations of left-leaning bias, especially from conservative politicians and commentators. In August 2025, Republican lawmakers in the U.S. Congress launched an investigation into alleged “manipulation efforts” in Wikipedia’s editing process, claiming these could inject bias and undermine neutrality both on the platform and in the AI systems trained on its data. Musk has echoed these criticisms, referring to Wikipedia as “Wokipedia” and accusing it of favoring liberal causes and people.
Grokipedia’s own article about Wikipedia accuses the older site of having “systemic ideological biases—particularly a left-leaning slant in coverage of political figures and topics.” David Sacks, a conservative tech investor and White House AI adviser, alleged on X, “An army of left-wing activists maintain the bios and fight reasonable corrections. Magnifying the problem, Wikipedia often appears first in Google search results, and now it’s a trusted source for AI model training. This is a huge problem.”
Studies have found some evidence of bias in Wikipedia’s content, but experts like Radu and Filippo Trevisan, Associate Professor of Public Communication at American University, caution that complete objectivity is a myth in any information system. “What Musk is trying to do is present AI as a solution to the bias problem,” Trevisan explained to DW. “There’s an attempt to try and provide an alternative to a traditional source, that might have been Wikipedia in this case, and try and make this outlet that’s AI-powered the new anchor, for alternative points of view.” Yet, as Trevisan notes, “We don’t know what goes on exactly in that black box, and so there isn’t the opportunity for us as consumers of information to verify why a piece of information might have ended up in the summary of an encyclopedia entry.”
Adding to the skepticism, Grokipedia has already been flagged for omitting critical information about Musk himself, such as allegations of a gesture resembling a Nazi salute or ties to environmental controversies at an xAI data center in Memphis. These omissions, coupled with the site’s lack of transparency in editorial decisions, have led some observers to question whether Grokipedia is as impartial as Musk claims.
Despite these concerns, Musk remains undeterred. He insists that Grokipedia will “exceed Wikipedia by several orders of magnitude in breadth, depth and accuracy.” Whether the public will embrace an AI-generated encyclopedia as the new standard for truth remains to be seen. For now, the battle lines are drawn: on one side, a nonprofit community of human volunteers; on the other, a billionaire’s AI-driven vision of objectivity. The world is watching to see which model will shape our collective understanding in the years ahead.