June 2025 marks a pivotal moment in the evolving landscape of privacy and data regulation, as new laws and high-profile legal actions underscore the increasing complexity of protecting personal information in the digital age. From Australia’s groundbreaking privacy tort to sweeping updates in U.S. child data protections and antitrust battles targeting tech giants, the world is witnessing a surge of initiatives aimed at balancing innovation, competition, and individual rights.
In Australia, a landmark privacy right of action came into effect on June 13, 2025, allowing individuals to directly pursue legal claims for serious invasions of privacy without relying on regulators. This statutory tort, introduced through the Privacy and Other Legislation Amendment Act 2024, represents the most controversial element of the first tranche of privacy reforms passed late last year. It empowers individuals to bring claims based on broad concepts such as intrusion upon seclusion or misuse of information relating to them, even when the conduct falls outside the scope of the existing Privacy Act.
Legal experts note that this new tort expands the protections available to Australians, aligning the country more closely with jurisdictions like the United Kingdom, Canada, New Zealand, and the United States, where similar privacy causes of action exist. The tort’s flexible application means businesses must rethink their privacy compliance programs, especially as it applies to a wider range of information beyond the Privacy Act’s definition of personal information. For instance, it covers scenarios involving surveillance and misuse of data that might not have been previously regulated.
Defenses and exemptions exist, including for journalists, law enforcement, government actors, and individuals under 18 years old. However, the threshold for a “serious” invasion is deliberately high to deter frivolous claims; relevant factors include the degree of offense or distress caused and whether the invasion was motivated by malice. Courts may also weigh the number of individuals affected, meaning large-scale data breaches could fall within the tort’s scope.
Employers face new exposures under the tort, especially regarding employee monitoring and surveillance practices, which have been the focus of recent parliamentary inquiries and reports. The Parliament’s January 2025 “The Future of Work” report and the Victorian Parliament’s May 2025 inquiry both call for enhanced employee protections around technology-driven surveillance, signaling further reforms on the horizon.
Meanwhile, the Office of the Australian Information Commissioner (OAIC) is bolstering its enforcement capabilities, supported by an $8.7 million budget allocation over three years. The OAIC aims to proactively address emerging harms through strategic enforcement and benchmark cases, with the Privacy Commissioner authorized to intervene in tort proceedings where significant legal or public interest issues arise.
Across the Pacific, the U.S. Department of Justice (DOJ) grapples with remedies following Judge Amit P. Mehta’s 2024 ruling that Google monopolized the search market. A central proposal is a data access requirement compelling Google to share its extensive user search data with competitors to level the playing field. The data sharing is intended to overcome Google’s scale advantages, which the court found give it an effectively insurmountable lead in search quality.
However, the privacy safeguards accompanying this remedy have drawn criticism. The DOJ requires Google to remove personally identifiable information before sharing data and mandates regular privacy audits by a technical committee. Yet experts argue these measures fall short in the face of the rapidly evolving AI-driven search landscape, where personalization plays a crucial role. AI services increasingly rely on vast troves of sensitive user data, including political views, medical conditions, and personal preferences, raising the stakes for privacy protection.
Critics highlight that simply removing names and obvious identifiers is insufficient. Historical cases, such as AOL’s 2006 release of “anonymized” search logs, demonstrated how easily individuals can be reidentified from supposedly de-identified data. Harvard researcher Latanya Sweeney’s work further shows that a combination of quasi-identifiers such as zip code, birth date, and gender can uniquely identify most individuals (roughly 87 percent of Americans, by her estimate). Recommendations therefore include implementing reasonable deidentification techniques, banning attempts to reidentify data, and ensuring privacy experts are integral to the technical committee overseeing data sharing.
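The quasi-identifier risk is easy to demonstrate. The minimal Python sketch below (field names and records are purely illustrative, not drawn from any real dataset or from the DOJ proposal) computes a dataset’s k-anonymity over zip code, birth date, and gender; a result of 1 means at least one person remains uniquely re-identifiable even after names have been stripped.

```python
from collections import Counter

# Quasi-identifiers highlighted in Sweeney's research: together they can
# single out most individuals even when names are removed.
QUASI_IDENTIFIERS = ("zip_code", "birth_date", "gender")  # illustrative field names

def k_anonymity(records, quasi_identifiers=QUASI_IDENTIFIERS):
    """Return the k-anonymity of a dataset: the size of the smallest group
    of records sharing the same quasi-identifier values. k == 1 means at
    least one record is uniquely re-identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy "anonymized" search-log extract with names already stripped.
records = [
    {"zip_code": "20001", "birth_date": "1990-02-14", "gender": "F", "query": "clinic near me"},
    {"zip_code": "20001", "birth_date": "1990-02-14", "gender": "F", "query": "allergy meds"},
    {"zip_code": "73301", "birth_date": "1985-07-30", "gender": "M", "query": "tax deadline"},
]

print(k_anonymity(records))  # -> 1: the third record is unique, so stripping names was not enough
```

Reasonable deidentification in practice means driving that k value up, for example by generalizing zip codes or birth dates into coarser buckets, before any data is shared.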
The DOJ’s current proposal excludes privacy experts from the technical committee, a puzzling omission given the committee’s broad privacy responsibilities. Additionally, the remedy allows Google to continue using personalized data internally while sharing only deidentified data with rivals, raising concerns about perpetuating competitive imbalances. Yet requiring Google to deidentify its own data could degrade its service quality and potentially drive users to competitors, complicating the balance between competition and consumer experience.
On the regulatory front, Europe’s Digital Markets Act (DMA) similarly mandates data sharing but demands full anonymization, which critics argue could render data unusable for developing competitive search engines. The U.S. approach seeks a middle ground with reasonable deidentification to preserve data utility while protecting privacy.
In the realm of child data privacy, the U.S. Federal Trade Commission (FTC) finalized significant updates to the Children’s Online Privacy Protection Act (COPPA), effective June 23, 2025. These amendments reflect the dramatic evolution of technology since COPPA’s last update in 2013, imposing stricter requirements on websites and online services collecting data from children under 13.
The new rules broaden the definition of protected personal information to include biometric identifiers such as fingerprints, retina patterns, voiceprints, facial templates, and genetic data. Notably, operators must obtain separate parental consent before sharing children’s data with third parties, with clear notification of who will receive the information and what categories of data are shared.
Data retention policies are tightened, limiting the duration operators can hold children’s data to what is reasonably necessary. Operators must also implement written information security programs, designate responsible employees, conduct annual risk assessments, and maintain transparent data retention policies. The rules recognize “mixed audience” websites that aren’t primarily for children but attract them, allowing limited data collection without parental consent for specific purposes such as responding to safety concerns.
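To make the retention requirement concrete, here is a minimal sketch, assuming each record carries a timezone-aware `collected_at` timestamp and that a one-year window has been adopted as the operator’s “reasonably necessary” period; the rule itself prescribes no specific duration, and the field names and window here are illustrative only.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; COPPA requires keeping children's data only
# as long as reasonably necessary, so the concrete duration is a policy choice.
RETENTION = timedelta(days=365)

def purge_expired(child_records, now=None):
    """Keep only records collected within the retention window.

    Assumes each record is a dict with a timezone-aware `collected_at`
    timestamp; a real system would delete from its datastore and log the
    purge for the written security program's audit trail."""
    now = now or datetime.now(timezone.utc)
    return [r for r in child_records if now - r["collected_at"] <= RETENTION]

# Example: a record collected two years ago is dropped, a recent one is kept.
now = datetime(2025, 6, 23, tzinfo=timezone.utc)
records = [
    {"child_id": "a1", "collected_at": datetime(2023, 6, 1, tzinfo=timezone.utc)},
    {"child_id": "b2", "collected_at": datetime(2025, 5, 1, tzinfo=timezone.utc)},
]
print([r["child_id"] for r in purge_expired(records, now)])  # -> ['b2']
```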
Organizations have until April 22, 2026, to comply, with civil penalties reaching up to $53,088 per violation. Legal experts advise companies to update their data collection, retention, and security practices and to utilize new methods for verifiable parental consent, including credit card verification and knowledge-based authentication.
Meanwhile, the advertising technology sector faces mounting regulatory pressure worldwide. In the UK, the Information Commissioner’s Office (ICO) intensified enforcement efforts in January 2025, targeting the top 1,000 sites and ad tech platforms. Despite previous warnings, 53 of the top 100 UK sites were found non-compliant, prompting threats of fines and increased scrutiny.
Major players have faced hefty penalties recently: Meta settled a case in the UK over personalized ads and was fined in the EU for inadequate user consent mechanisms; Google agreed to pay $1.4 billion to Texas over privacy violations involving location tracking and facial recognition data; TikTok was fined €530 million for data protection failures; and LinkedIn faced lawsuits alleging unauthorized disclosure of private messages.
Industry experts emphasize that privacy is no longer just about compliance but about building trust and sustainable relationships with consumers. As AI, machine learning, and graph-based technologies reshape ad tech, transparent and privacy-first data practices are essential. Tools that assess real-time compliance and curate publisher lists help mitigate legal risks, especially as brand safety becomes tightly linked to data governance.
With data-driven advertising underpinning much of the internet’s content and services, the sector must navigate the tension between innovation and privacy protection carefully. Regulators worldwide are signaling that non-compliance will carry significant consequences, urging advertisers, platforms, and agencies to adopt ethical data practices and prioritize consumer privacy.
Altogether, these developments illustrate a global shift toward stronger privacy protections amid rapid technological change. Whether through new legal rights in Australia, antitrust remedies in the U.S., updated child privacy rules, or intensified ad tech enforcement, the message is clear: safeguarding personal data has become a central challenge—and priority—in the digital era.