In a closely watched legal battle that could shape the future of artificial intelligence and copyright law, the UK High Court has largely sided with Stability AI, the London-based developer behind the popular image-generating model Stable Diffusion, in a lawsuit brought by global photo giant Getty Images. The decision, delivered on November 4, 2025, marks a pivotal moment in the ongoing struggle between technology companies and creative industries over the use of copyrighted materials to train AI systems.
Getty Images, headquartered in Seattle, had accused Stability AI of scraping an estimated 12 million images from its vast online library without permission. The goal, Getty alleged, was to use these images to train Stable Diffusion, a model that can generate new images based on text prompts. This, Getty argued, amounted to both copyright and trademark infringement on a massive scale—a claim that drew the attention of artists, authors, and tech executives worldwide.
But as the dust settled in the London courtroom, Justice Joanna Smith’s ruling offered a mixed but mostly favorable outcome for Stability AI. While the judge found that Stability AI had infringed Getty’s trademark in some instances—specifically when its AI-generated images reproduced recognizable Getty watermarks—she dismissed the core copyright infringement claims. According to the court, Stable Diffusion "does not store or reproduce any Copyright Works (and has never done so)," and thus did not breach UK copyright law. This nuanced distinction, as reported by the Associated Press and Reuters, leaves the legality of AI training on copyrighted data in something of a gray area.
The case had already narrowed by the time the verdict was delivered. During the three-week trial in June, Getty Images dropped its primary copyright infringement claims after acknowledging that Stable Diffusion’s training had occurred outside the UK—on servers operated by U.S. tech giant Amazon, according to court documents. Getty instead pursued secondary infringement claims, arguing that making Stable Diffusion available to UK users was akin to importing unlawful copies of its images. The court, however, was not convinced. Justice Smith wrote, "While it is true that the model weights are altered during training by exposure to Copyright Works, by the end of that process the Model itself does not store any of those Copyright Works; the model weights are not themselves an infringing copy and they do not store an infringing copy."
On the trademark front, Getty found some vindication. The court agreed that when Stable Diffusion produced images bearing Getty’s watermark—sometimes even the logo of its iStock subsidiary—it constituted trademark infringement. Importantly, the judge rejected Stability AI’s argument that the responsibility should fall on users, not the model provider. "The Court rejected Stability AI’s attempt to hold the user responsible for that infringement, confirming that responsibility for the presence of such trademarks lies with the model provider, who has control over the images used to train the model," Getty said in a statement released after the ruling. "This is a significant win for intellectual property owners."
Still, the judge was careful to note the limited scope of her findings. "While I have found instances of trademark infringement, I have been unable to determine that these were widespread," Justice Smith wrote, calling her conclusions "both historic and extremely limited in scope." Legal experts echoed this sentiment. Iain Connor, an intellectual property partner at law firm Michelmores, told Reuters that the withdrawal of Getty’s main copyright claims "leaves the UK without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials." Rebecca Newman, a legal director at Addleshaw Goddard, told The Guardian that the judgment reveals the UK’s secondary copyright regime isn’t robust enough to protect creators in the age of AI.
For Stability AI, the outcome was a welcome relief. "We are pleased with the court’s ruling on the remaining claims in this case," said Christian Dowell, the company’s general counsel. "This final ruling ultimately resolves the copyright concerns that were the core issue." Stability AI, which has raised more than $170 million in funding and is known for its open-source approach to AI models, had argued that the case didn’t belong in the UK since the actual model training took place abroad.
The ruling’s impact was felt on both sides of the Atlantic. Getty Images’ shares dipped 3% before the U.S. market opened on the day of the decision and closed down 9%, reflecting investor uncertainty over the implications for the company’s business model. Meanwhile, Getty made clear it would use findings from the UK case in ongoing litigation in the United States, where it refiled a copyright lawsuit against Stability AI in a San Francisco federal court in August 2025.
Beyond the immediate parties, the decision is just one chapter in a much broader saga. The explosion of generative AI has triggered a wave of lawsuits—over 50 and counting—pitting tech firms against artists, writers, and media companies. In recent months, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit by authors, while a federal judge dismissed a similar case brought by 13 authors against Meta Platforms. Hollywood studios aren’t sitting idle either: Warner Bros., Disney, and Universal have filed lawsuits against AI companies, claiming their image generators create unauthorized copies of copyrighted characters.
Getty, in its post-ruling statement, sounded a note of caution for the creative community. "We remain deeply concerned that even well-resourced companies such as Getty Images face significant challenges in protecting their creative works given the lack of transparency requirements," the company said. Getty urged governments, including the UK, to "establish stronger transparency rules which are essential to prevent costly legal battles and to allow creators to protect their rights." The company also emphasized that the ruling established "a powerful precedent that intangible articles, such as AI models, are subject to copyright infringement claims in the same way as tangible articles."
Despite the outcome, many legal experts noted the unresolved questions. Does using copyrighted material to train AI models constitute infringement? If so, under what circumstances? And who, ultimately, bears responsibility—the developers, the users, or both? Justice Smith herself acknowledged the "very real societal importance" of these issues, but noted that the court could only rule on the "diminished" case that remained before it and could not consider "issues that have been abandoned."
As the AI industry continues to expand and creative industries push for stronger protections, the stakes are only getting higher. The UK ruling may have provided some clarity, but it leaves the door wide open for future legal battles—on both sides of the Atlantic and beyond. For now, one thing is certain: the line between innovation and infringement remains as blurry as ever.