Technology
24 October 2025

Google Faces $15 Million Lawsuit Over AI Defamation

Robby Starbuck alleges Google’s AI tools spread false and damaging claims, raising urgent questions about accountability for AI-generated misinformation.

On October 22, 2025, conservative activist Robby Starbuck launched a $15 million defamation lawsuit against Google in Delaware Superior Court, accusing the tech giant’s artificial intelligence tools of generating and spreading false, damaging claims about him. The suit, which has quickly drawn national attention, highlights the growing legal and ethical storm swirling around AI-generated content and its real-world consequences for individuals and companies alike.

According to The Wall Street Journal, Starbuck’s complaint centers on several instances in which Google’s AI systems, including its Bard and Gemma chatbots, allegedly produced and disseminated statements labeling him a "child rapist," "serial sexual abuser," and "shooter." The lawsuit details how, in December 2023, Bard falsely connected Starbuck with white nationalist Richard Spencer, citing fabricated sources. Then, in August 2025, Google’s Gemma chatbot reportedly went further, accusing Starbuck of sexual assault, spousal abuse, and participation in the January 6 Capitol riot, and even linking him to the Jeffrey Epstein files, all based on fictitious or nonexistent sources.

Starbuck, known for his outspoken campaigns against corporate diversity, equity, and inclusion (DEI) initiatives, claims these AI-generated statements have caused real harm. He asserts that some people believed the false accusations, leading to increased threats against his life. He even referenced the recent assassination of fellow conservative activist Charlie Kirk to underscore the potential dangers. In his words, “No one—regardless of political beliefs—should ever experience this. Now is the time for all of us to demand transparent, unbiased AI that cannot be weaponized to harm people.”

This isn’t Starbuck’s first legal tussle with a tech heavyweight over AI-generated misinformation. In April 2025, he filed a similar lawsuit against Meta Platforms, alleging that its AI falsely claimed he had participated in the January 6 Capitol attack and had been arrested for a misdemeanor. That case concluded in August with a confidential settlement, but not before Meta hired Starbuck as an advisor on AI bias issues—an outcome that suggests Starbuck’s legal strategy may be as much about gaining influence over AI policy as it is about seeking damages.

Google, for its part, has responded with a mix of technical candor and corporate defense. Company spokesperson José Castañeda acknowledged the problem, stating, “Hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimize. But as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.” He added that the defamatory claims in question “mostly deal with claims related to hallucinations in Bard that we addressed in 2023.” Castañeda also emphasized that Google attempted to resolve the matter before litigation, noting, “We did try to work with the complainant’s lawyers to address their concerns.”

Google’s defense leans heavily on the technical realities of large language models (LLMs), the backbone of modern AI chatbots. As outlined in research published by OpenAI on September 4, 2025, so-called “AI hallucinations”—when a model generates plausible but false information—are an inherent statistical limitation of these systems. Unlike humans, LLMs don’t truly “know” facts; they predict what text should come next based on patterns in their training data. This can lead to confident-sounding but entirely fabricated statements, especially when users deliberately craft prompts to elicit misleading answers.
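
To make that statistical point concrete, here is a minimal toy sketch in Python. It is not Google’s Bard or Gemma, and the miniature training text and helper names are invented for illustration; it simply builds a bigram model that strings together statistically likely words with no check against reality, which is the basic mechanism behind a confident-sounding fabrication.

```python
import random
from collections import defaultdict

# Toy sketch, not Google's actual system: a bigram "language model" that only
# learns which word tends to follow which, with no notion of truth.
corpus = (
    "the activist was investigated for fraud . "
    "the activist was praised for charity . "
    "the executive was investigated for fraud ."
).split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a fluent-sounding sentence one word at a time. The output reflects
# patterns in the training text, not facts about any real person.
random.seed(0)
word, sentence = "the", ["the"]
while word != "." and len(sentence) < 12:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the activist was investigated for fraud ."
```

A production LLM replaces the bigram table with a neural network trained on vastly more text, but the underlying mechanism is analogous: the system selects likely continuations, not verified facts, which is why a carefully crafted prompt can coax out a plausible but false claim.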

“We know LLMs aren’t perfect, and hallucinations are a known issue, which we disclose and work hard to minimize,” Castañeda reiterated in a statement posted at 8:02 PM on October 22, 2025. Google also pointed to an independent study—though it didn’t specify which one—claiming, “We have the least biased LLM among competitors.” Whether this claim will sway the court or the public remains to be seen.

The legal landscape for AI defamation is, in many ways, uncharted territory. As The Wall Street Journal notes, no U.S. court has yet awarded damages in a defamation suit involving an AI chatbot. In 2023, conservative radio host Mark Walters sued OpenAI after ChatGPT allegedly linked him to fraud and embezzlement. The court sided with OpenAI, ruling that Walters failed to prove “actual malice”—a key standard in defamation law that requires showing the defendant knowingly made false statements or acted with reckless disregard for the truth. Applying this standard to algorithmic systems, which operate without human intent, poses a thorny legal challenge.

Starbuck’s case is further complicated by the timing and the broader context. The complaint comes as Google faces mounting scrutiny over its AI content practices. On September 12, 2025, Penske Media Corporation filed a sweeping federal antitrust lawsuit accusing Google of coercing publishers to provide content for its AI products without fair compensation. Separately, a federal jury in San Francisco awarded $425.7 million against Google on September 3, 2025, for privacy violations—though that case involved data tracking, not AI-generated content.

The issue of AI hallucinations isn’t confined to search engines or chatbots. On August 14, 2025, an Arizona federal court sanctioned an attorney whose legal brief contained multiple AI-generated citations to non-existent cases. The sanctions included revocation of her pro hac vice status and mandatory notification to state bar authorities, signaling that courts are taking the accuracy—and potential dangers—of AI-generated information seriously.

Meanwhile, pressure is mounting from other quarters. In August, a bipartisan coalition of 44 state Attorneys General sent a formal letter to twelve major AI companies, including Google, Meta, OpenAI, and Apple, demanding stronger protections for children against predatory AI products. The marketing and advertising industries are also watching closely, as brand safety concerns now extend to the risk that ads might appear alongside false or defamatory AI-generated content.

At its heart, Starbuck’s lawsuit is a test case: can technology companies be held liable for false information generated by AI systems, especially when those systems are marketed as providing accurate answers? Google argues that, despite its safeguards, no AI system is immune to manipulation by determined users. Starbuck and his supporters counter that the scale and impact of AI-generated misinformation—especially when it targets individuals—demands new legal and ethical standards.

The Delaware Superior Court, known for handling complex corporate litigation, now faces the task of determining whether existing defamation frameworks are adequate for the AI age. The outcome could set a precedent for future cases, influencing how tech companies design, market, and police their AI systems. If Starbuck secures an advisory role at Google, as he did with Meta, it could embolden other activists to pursue similar strategies. If, on the other hand, Google prevails, it may discourage future plaintiffs and reinforce the status quo.

As AI-generated content becomes ever more ubiquitous—in search results, social media, and beyond—the stakes for accuracy, accountability, and trust have never been higher. The Starbuck case is just the latest, but almost certainly not the last, chapter in this unfolding story.