On October 10, 2025, a significant move unfolded in California’s ongoing debate over digital privacy and the reach of artificial intelligence-powered surveillance. UC Law San Francisco’s Center for Constitutional Democracy, in partnership with the ACLU of Northern California and the Electronic Frontier Foundation, submitted an amicus brief in the closely watched case of Mata v. Digital Recognition Network. The brief, also crafted with input from Stanford Law School’s Juelsgaard Intellectual Property and Innovation Clinic, takes aim at what its authors see as a misinterpretation of a key California privacy law—a law designed to shield citizens from the ever-expanding gaze of modern surveillance technology.
At the heart of the legal battle is a deceptively simple but crucial question: When can Californians take action to protect their privacy? According to the amicus brief, the answer should be "right away"—not only after concrete harms have already occurred. The brief argues that the trial court, in dismissing Mata’s case, set the bar far too high by requiring proof of additional harm beyond the invasion of privacy itself. As the brief puts it, "California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms."
Nicole Ozer, Executive Director of the Center for Constitutional Democracy and a nationally recognized authority on the intersection of rights, technology, and democracy, helped lead the charge. Ozer and her coauthors warn that the stakes in this case are enormous, not just for Mata, but for anyone living under the shadow of new surveillance systems. "This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety," Ozer wrote in the brief.
The backdrop for this legal drama is the rapid proliferation of artificial intelligence-powered surveillance systems, especially automated license plate readers (ALPRs). These systems, now a fixture in many American communities, "incessantly and indiscriminately" capture the locations of vehicles and other information about drivers, according to the brief. The data, once gathered, doesn’t just vanish. Instead, government and non-governmental actors across California—and increasingly, other states—"routinely amass and retain records of people’s movements for months or years."
For many Californians, the implications of this kind of surveillance are only beginning to sink in. While the average person’s most visible brush with privacy protections might be the seemingly endless stream of website cookie pop-up notices, the more consequential battles are being fought elsewhere. According to an analysis published by The Washington Post on October 10, 2025, these pop-ups are often little more than a nuisance, providing an illusion of control while doing little to actually shield personal data. "It can feel discouraging to care about your personal data flowing everywhere like floodwater," the article observes, capturing a growing sense of helplessness among consumers.
However, the landscape is shifting. The same Washington Post analysis notes that new privacy protections are emerging, giving individuals more effective tools to safeguard their data online. This broader context is critical: as digital technologies become ever more embedded in daily life, the demand for robust privacy protections has never been higher. The Mata v. Digital Recognition Network case, with its focus on AI-driven surveillance and the right to challenge it early, is a bellwether for how these protections might evolve.
So, what exactly is at stake in Mata’s lawsuit? The plaintiff is challenging the use of ALPRs by Digital Recognition Network, a company that collects and processes vast amounts of location data. The trial court dismissed the suit, ruling that Mata had not shown any harm beyond the collection of personal data itself. But, as the amicus brief argues, this reasoning sets a dangerous precedent. If individuals must wait until their information is misused, or until they suffer tangible consequences, the very purpose of privacy laws—to prevent harm before it occurs—could be undermined.
Ozer and her colleagues insist that privacy harm is real harm. The brief states unequivocally, "protection against unfettered information collection has taken on new importance today, as unblinking artificial intelligence-powered surveillance systems such as automated license plate readers proliferate in American communities." In their view, the right to privacy is not just a theoretical safeguard but a practical shield against the risks posed by modern surveillance.
Legal scholars and privacy advocates are watching the case closely. The involvement of the ACLU of Northern California and the Electronic Frontier Foundation underscores the broader civil liberties concerns at play. Both organizations have long championed digital rights, arguing that unchecked surveillance can chill free expression, erode trust in public institutions, and disproportionately impact marginalized communities.
The brief’s argument draws strength from California’s unique legal landscape. The state’s Constitution and statutory framework are among the most protective of privacy rights in the United States. Unlike federal law, which often requires plaintiffs to show concrete injury, California’s laws have historically recognized the invasion of privacy itself as a sufficient basis for legal action. This principle, the brief contends, is especially vital in an era when surveillance technologies are both pervasive and largely invisible to those affected.
But the story doesn’t end in the courtroom. The public’s growing unease about data collection is reflected in everyday frustrations—like those ubiquitous cookie pop-ups that, as The Washington Post notes, seem "pointless" and fail to provide meaningful protection. While these notifications are the most visible face of privacy regulation for many, they are only the tip of the iceberg. The real work of protecting personal data, advocates argue, happens in cases like Mata’s, where the law is tested against the realities of 21st-century technology.
As Nicole Ozer and her coauthors make clear, the outcome of Mata v. Digital Recognition Network could have ripple effects far beyond California. If the courts affirm that privacy laws allow individuals to challenge surveillance before additional harms occur, it could set a powerful precedent for other states grappling with similar issues. Conversely, a ruling that weakens these protections could embolden companies and government agencies to expand surveillance with little fear of accountability.
For now, the case stands as a flashpoint in the ongoing struggle to balance technological innovation with fundamental rights. With artificial intelligence and data-driven systems reshaping the boundaries of privacy, the question is no longer whether surveillance will touch our lives—it’s how, and whether the law will stand as a meaningful barrier to its excesses.
The next chapter in this legal and social saga will be written not just by the courts, but by the millions of people whose daily routines are quietly tracked, stored, and analyzed. As the debate continues, the message from California’s privacy advocates is clear: the time to defend privacy is now, before the harm is done.