On September 10, 2025, a federal courtroom in California became the latest battleground in the escalating war between artificial intelligence companies and creative professionals, as Judge William Alsup paused the approval of a record-setting $1.5 billion settlement between AI startup Anthropic and a group of authors who had accused the company of copyright infringement. The case, which has drawn global attention and set legal precedents, highlights the growing pains of an industry racing ahead of existing copyright law—and the uncertainty that still hangs over the fate of writers whose work has powered the AI revolution.
Anthropic, founded in 2021 and now valued at $183 billion after a recent $13 billion funding round, found itself in the crosshairs in 2024 when novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson filed a class-action lawsuit. Their claim: Anthropic had allegedly pirated over seven million books from two online "shadow libraries" in June 2021 and July 2022, using the material to train its Claude chatbot without permission or compensation. According to The Conversation, the case quickly became a rallying point for authors and publishers who have seen their livelihoods threatened by the unchecked expansion of AI.
The proposed settlement, which would pay authors about $3,000 for each of the estimated 500,000 books included in the agreement, was hailed by some as a historic victory. If approved, it would be the largest copyright settlement in US history, requiring Anthropic to destroy illegally downloaded books and commit to never again using pirated works in its training processes. Mary Rasenberger, CEO of the Authors Guild, called it “an excellent result for authors, publishers, and rightsholders.” She added, “The settlement will lead to more licensing that gives authors both compensation and control over the use of their work by AI companies, as should be the case in a functioning free market society.”
But the celebrations were short-lived. Judge Alsup, whose original ruling in June 2025 had already made waves by drawing a distinction between legal AI training and illegal acquisition of copyrighted content, expressed deep reservations about the deal. “I have an uneasy feeling about hangers on with all this money on the table,” he remarked, according to Bloomberg Law and the Associated Press. He went further, stating, “We’ll see if I can hold my nose and approve it.” Alsup’s skepticism centers on whether authors might be pressured into accepting the deal and whether the proposed compensation is adequate given the scale of the alleged infringement.
His concerns are not without merit. While $3,000 per book might sound substantial, US copyright law allows for statutory damages of up to $150,000 per infringed work in cases of willful infringement. If the settlement is rejected and the case goes to trial in December, as currently scheduled, Anthropic could face damages in the multiple billions—a sum that legal analyst William Long of Wolters Kluwer warns could “jeopardize or even bankrupt the company.”
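The gap between the settlement figure and Anthropic's theoretical exposure can be made concrete with back-of-the-envelope arithmetic. The sketch below is purely illustrative and assumes, for the sake of the upper bound, that the statutory maximum for willful infringement applied to every one of the roughly 500,000 covered works:

```python
# Illustrative comparison of the proposed settlement vs. maximum statutory
# exposure, using the figures reported above. Not a legal damages estimate.

BOOKS_IN_SETTLEMENT = 500_000      # estimated works covered by the agreement
PER_BOOK_PAYMENT = 3_000           # proposed settlement payout per book, USD
STATUTORY_MAX_PER_WORK = 150_000   # ceiling for willful infringement, USD

settlement_total = BOOKS_IN_SETTLEMENT * PER_BOOK_PAYMENT
statutory_ceiling = BOOKS_IN_SETTLEMENT * STATUTORY_MAX_PER_WORK

print(f"Settlement total:  ${settlement_total:,}")    # $1,500,000,000
print(f"Statutory ceiling: ${statutory_ceiling:,}")   # $75,000,000,000
print(f"Ratio: {statutory_ceiling // settlement_total}x")  # 50x
```

Even granting that courts rarely award the maximum on every work, the fifty-fold spread between the two figures helps explain both the judge's question about adequacy and the analyst's warning about bankruptcy.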
The stakes are high not just for Anthropic, but for the entire tech industry. As The Conversation points out, the outcome will have implications for ongoing copyright cases against OpenAI, Microsoft, Google, and Apple. The June 2025 ruling by Judge Alsup, which separated legal AI training from illegal content acquisition, has already shaped the strategies of both plaintiffs and defendants in these cases. Meanwhile, Meta recently prevailed in a separate copyright case, though the ruling left the door open for future litigation.
Yet, the complexity of the Anthropic settlement runs deeper than the headline numbers. The agreement stipulates that only authors whose publishers have registered their work with the US Copyright Office within a certain timeframe will be eligible for compensation. This has sparked concern among international authors, especially in Australia, where many writers may be excluded from the payout. Lucy Hayward, CEO of the Australian Society of Authors, voiced her worries: “While all of the details are yet to be revealed, this settlement could represent a very welcome acknowledgement that AI companies cannot just steal authors’ and artists’ work to build their large language models. However, we suspect many international authors may miss out on settlement money.”
Australian policymakers have taken note. Stuart Glover, head of policy at the Australian Publishers Association, welcomed the court-enforced accountability but emphasized the need for ongoing compensation. “This settlement shows why AI companies must respect copyright and pay creators—not just see what they can get away with,” he said. Glover called on the Australian government to require tech companies to pay for the use of Australian books in AI training, echoing a broader international debate about how to balance innovation with the rights of creators.
For its part, the Australian government has shown little appetite for relaxing copyright protections. As Arts Minister Tony Burke stated in August 2025, there are “no plans, no intention, no appetite to be weakening” the country’s copyright laws. This stance stands in contrast to lobbying efforts by major AI players—including Google and Microsoft—who have pushed for copyright exemptions to facilitate AI development. The recent Productivity Commission interim report in Australia proposed a text and data mining exception to the Copyright Act, which would allow AI training on copyrighted Australian work. The proposal, however, has faced strong opposition from the Australian Society of Authors and the publishing industry.
The Australian Publishers Association, while not opposed to AI outright, has advocated for a balanced approach that prioritizes ethical frameworks, transparency, and protections for creators. Their goal: to ensure that both AI development and cultural industries can flourish side by side. As the industry searches for equilibrium, the Anthropic case has become a touchstone for what’s at stake and what’s possible.
Meanwhile, the practicalities of the settlement remain in flux. The plaintiffs and Anthropic are expected to finalize the list of works to be compensated by September 15, and Judge Alsup has scheduled another hearing for September 25 to address his concerns. Music rightsholders—many of whom are also suing Anthropic over its training practices—are watching the proceedings closely, aware that the outcome could set a precedent for their own claims.
Keith Kupferschmid, president and CEO of the US-based Copyright Alliance, argues that the sheer scale of Anthropic’s recent funding demonstrates that “AI companies can afford to compensate copyright owners for their works without it undermining their ability to continue to innovate and compete.” Whether the courts will agree remains to be seen, but one thing is clear: the battle over who owns the building blocks of artificial intelligence is far from over.
As September draws to a close, all eyes are on Judge Alsup’s next move. The fate of thousands of authors—and perhaps the future relationship between AI and creative industries—may hinge on whether he can, as he put it, “hold my nose and approve it.”