On September 5, 2025, Anthropic, one of the fastest-growing artificial intelligence companies in the world, agreed to pay at least $1.5 billion plus interest to settle a class-action copyright lawsuit brought by authors whose works were used to train the company’s AI models. The deal, which awaits judicial approval, is poised to become the largest publicly reported copyright recovery in history and marks a watershed moment in the ongoing legal battles between tech giants and creative professionals over the boundaries of intellectual property in the AI era.
The lawsuit, initially filed in federal court in California in 2024 by three authors—thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson—centered on Anthropic’s use of approximately 500,000 published works. According to court documents cited by Bloomberg and NPR, these books were obtained from so-called “shadow libraries” such as Library Genesis and Pirate Library Mirror, notorious for distributing pirated content. The authors alleged that Anthropic had “committed large-scale copyright infringement” by downloading and commercially exploiting their books to train its chatbot, Claude.
“If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment,” said Justin Nelson, a lawyer for the authors, in a statement quoted by The New York Times. Nelson emphasized the significance of the $3,000-per-work payout, noting, “This result is nothing short of remarkable.”
The settlement, which includes four payments starting with a $300 million initial payout due within five business days of court approval, could be even larger if the number of affected works exceeds 500,000. “If the Works List ultimately exceeds 500,000 works, then Anthropic will pay an additional $3,000 per work that Anthropic adds to the Works List above 500,000 works,” Nelson explained in a memorandum to the judge.
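The payout structure described in the memorandum reduces to simple arithmetic: a floor of $3,000 per work across the 500,000-work baseline (yielding the $1.5 billion minimum), plus $3,000 for each work added above that threshold. A minimal sketch, assuming only the figures reported in the filing (the function name and interface are illustrative, not from any court document):

```python
# Per-work payout formula as described in the settlement memorandum:
# a 500,000-work baseline at $3,000 per work, plus $3,000 for each
# work added to the Works List beyond the baseline.

BASE_WORKS = 500_000
PER_WORK_DOLLARS = 3_000

def settlement_minimum(works_on_list: int) -> int:
    """Return the minimum settlement amount in dollars, excluding interest."""
    base = BASE_WORKS * PER_WORK_DOLLARS  # $1.5 billion floor
    extra = max(0, works_on_list - BASE_WORKS) * PER_WORK_DOLLARS
    return base + extra

# 500,000 works yields the $1.5 billion floor; every 100,000 works
# above the baseline adds another $300 million.
print(settlement_minimum(500_000))  # 1500000000
print(settlement_minimum(600_000))  # 1800000000
```

Note that under this formula the total never drops below $1.5 billion even if the final Works List comes in under 500,000 entries, consistent with the reported "at least $1.5 billion" figure.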
Judge William Alsup of the U.S. District Court for the Northern District of California, who presided over the case, issued a pivotal ruling in June 2025. He found that while training AI systems on copyrighted books could be considered “fair use” if the materials were legally acquired and the use was transformative, downloading pirated copies was not protected under fair use. As Alsup wrote, “The training use was a fair use. The technology at issue was among the most transformative many of us will see in our lifetimes.” Yet, he made clear that Anthropic’s executives “knew they contained pirated books” when they downloaded millions of titles from Library Genesis and Pirate Library Mirror.
The ruling cut both ways. On one hand, it provided a measure of legal clarity for AI companies seeking to train their models on copyrighted content; on the other, it underscored the risk and liability attached to acquiring works through illicit means. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” Nelson said, as reported by The New York Times.
Anthropic, for its part, maintained that it did not use any pirated works to build AI technologies that were publicly released. Aparna Sridhar, the company’s deputy general counsel, commented, “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use. Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”
As part of the settlement, Anthropic agreed to destroy the original pirated book files it downloaded—a move that was welcomed by authors’ advocates. Mary Rasenberger, CEO of the Authors Guild, called the deal “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
The settlement's financial scale is staggering in absolute terms, yet modest relative to Anthropic's meteoric growth. The company, founded by former OpenAI leaders in 2021, was valued at $183 billion as of early September 2025 after raising $13 billion in new investments. Despite expecting $5 billion in sales this year, Anthropic, like many of its AI peers, has not yet reported profits, relying on investor backing to fund the high costs of AI development.
Legal experts have drawn parallels between this settlement and the early 2000s file-sharing cases involving Napster and Grokster, which fundamentally reshaped the music and entertainment industries. “This is the AI industry’s Napster moment,” said Cecilia Ziniti, an intellectual-property lawyer and CEO of startup GC AI, in comments to The New York Times. Chad Hummel, a trial lawyer with McKool Smith, observed, “This will cause generative AI companies to sit up and take notice.”
The Anthropic case is just one of more than 40 lawsuits currently pending against AI companies over copyright issues. High-profile authors like John Grisham, George R.R. Martin, and Jodi Picoult are among nearly 20 bestselling writers suing OpenAI, alleging “systematic theft on a mass scale” for using their works to train ChatGPT. Meanwhile, The New York Times has sued OpenAI and Microsoft over alleged copyright infringement related to AI-generated news content.
Some AI companies have opted to secure licensing agreements with publishers and news organizations. OpenAI, for example, has inked deals with Axel Springer, Condé Nast, News Corp, and The Washington Post. Amazon signed a licensing agreement with The New York Times in May. These moves suggest a possible path forward for reconciling the needs of AI developers and copyright holders, though the legal landscape remains fluid.
Anthropic’s settlement does not set a binding legal precedent, since the case was resolved before trial. However, it is widely expected to influence the outcomes of other disputes and encourage both sides to seek negotiated solutions rather than risk the unpredictability of a courtroom battle. As legal analyst William Long noted, “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court.”
For now, the message from the courts and the marketplace is clear: while AI development may be transformative, acquiring copyrighted material through piracy is a costly gamble. The Anthropic case has drawn a line in the sand—one that both the tech industry and creative communities will be watching closely in the months and years ahead.