In a sweeping set of developments across North America and Europe, privacy and cybersecurity law is undergoing rapid and significant change, with Canada, the United States, and the European Union all rolling out new policies, penalties, and guidance that affect businesses, institutions, and individuals alike. The past month alone has seen landmark enforcement actions, legislative advances, and strategic pivots that signal a new era for digital governance and data rights.
Canada, in particular, is stepping into the spotlight. According to MLex, the Canadian federal government is poised to launch a 30-day sprint starting October 1, 2025, aimed at developing a national artificial intelligence (AI) strategy. This initiative is designed to assert Canada’s digital sovereignty and regulatory autonomy, distinct from the approaches of both the United States and the European Union. The plan includes a parallel overhaul of Canada’s 25-year-old privacy law, reflecting the urgency to modernize the country’s legal framework in light of emerging technologies and global data flows.
Meanwhile, at the provincial level, the Office of the Information and Privacy Commissioner of Ontario has made history. On September 29, 2025, the commissioner imposed the first-ever administrative monetary penalties (AMPs) under Ontario’s Personal Health Information Protection Act (PHIPA)—and, in fact, the first such penalties ever issued by a privacy commissioner anywhere in Canada. A physician was fined $5,000 for improperly accessing patient records via a hospital electronic health record system and using this information to solicit parents of newborn males for circumcision services. The physician’s private clinic was also fined $7,500 for failing to meet its privacy obligations. These penalties, as reported by Fasken, mark a new era of accountability for privacy breaches in the Canadian health sector.
Ontario’s privacy commissioner didn’t stop there. In complaint report PX24-00001, the commissioner found that a university had violated the province’s public sector privacy law (FIPPA) by installing “smart” vending machines equipped with face detection technology—without notifying users or obtaining their consent. Despite some contractual safeguards with the technology vendor, the university’s procurement process was found lacking, particularly in its failure to conduct a privacy impact assessment or require full disclosure from the vendor. The commissioner recommended a thorough review of the university’s privacy policies and procurement practices to ensure compliance with FIPPA in future technology deployments.
At the federal level, the Office of the Privacy Commissioner of Canada issued new guidance on August 11, 2025, for both private sector businesses and federal institutions on the processing of biometric information. The guidance, as detailed by Fasken, is anchored in the principles of the Personal Information Protection and Electronic Documents Act (PIPEDA) and the federal Privacy Act. It stresses that biometric data, such as fingerprints, facial images, and behavioral biometrics, is inherently sensitive: it can uniquely identify individuals, is difficult or impossible to change, and is closely linked to personal identity. The guidance outlines best practices for identifying appropriate purposes, obtaining meaningful consent, and minimizing the collection, use, and retention of such data.
Europe, too, has seen pivotal legal developments in recent weeks. On September 4, 2025, the European Court of Justice clarified the treatment of pseudonymized data in case C-413/23 P. The Court held that whether pseudonymized data counts as personal data depends on the perspective of the party holding it: for a controller who retains the key needed to re-identify individuals, the information remains personal data, but for a recipient with no realistic means of re-identification, thanks to robust technical and organizational measures, the same data may fall outside that definition. This relative interpretation has significant implications for data transfers and privacy compliance across the continent.
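To make the distinction concrete, here is a minimal sketch of keyed pseudonymization in Python. The idea is that the controller derives pseudonyms with a secret key it alone retains, so a processor receiving only the pseudonymized records has no computational path back to the original identifiers. The key, field names, and record shape are illustrative assumptions, not a description of the system at issue in C-413/23 P.

```python
import hmac
import hashlib

# Hypothetical key: held by the controller only, never shared with processors.
SECRET_KEY = b"controller-only-key"

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Derive a stable pseudonym from a direct identifier using HMAC-SHA256.

    Without the key, reversing the mapping requires guessing identifiers,
    which is what puts re-identification out of the processor's reach.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Controller side: strip direct identifiers before sharing with a processor.
record = {"email": "jane@example.com", "complaint_text": "..."}  # hypothetical record
shared = {
    "subject_id": pseudonymize(record["email"]),
    "complaint_text": record["complaint_text"],
}

# The controller can re-link by recomputing pseudonymize() over known
# identifiers; the processor, lacking SECRET_KEY, cannot.
print(shared["subject_id"])
```

On the Court's reasoning, the controller in this sketch still holds personal data, while the processor receiving only `shared` may not, provided the key stays out of its hands and no other means of re-identification are reasonably available to it.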
Just a day later, on September 5, 2025, the European Commission determined that Brazil provides a level of data protection comparable to that of the EU. Once formally adopted, the adequacy decision will enable free data flows between the EU and Brazil, benefiting businesses, public authorities, and research projects, without the need for additional safeguards such as standard contractual clauses or transfer risk assessments. The Brazilian authorities are expected to reciprocate, allowing data to move seamlessly in both directions.
In addition, the European Data Protection Board (EDPB) adopted guidelines during its September plenary meeting to clarify how the General Data Protection Regulation (GDPR) interacts with the new Digital Services Act (DSA). The DSA aims to create a safer online environment and protect fundamental rights, especially for minors, by imposing obligations on online platforms and search engines. The EDPB’s guidelines, now open for public consultation, are intended to help organizations navigate overlapping requirements and ensure robust data protection in the digital space.
To further support organizations, the European Commission also published a set of Frequently Asked Questions (FAQs) on the Data Act. This legislation establishes horizontal rules for data access and use, focusing on fairness and innovation in the data economy while safeguarding fundamental rights. The FAQs are designed to help businesses implement the Act and understand its practical implications.
Shifting focus to the United States, state and federal authorities are also tightening the reins on digital technologies. In September 2025, the California legislature approved Senate Bill 243, which introduces new safeguards for AI-powered “companion” chatbots. These systems, designed to provide adaptive, human-like interactions, would be subject to strict requirements aimed at protecting children and other vulnerable users. The bill bars operators from exposing minors to sexual content or material promoting self-harm or suicidal ideation, and mandates recurring reminders that users are interacting with an AI, not a real person. It also establishes annual reporting and transparency requirements and gives individuals a private right of action against companies that violate it. If signed into law, California will become the first state to require such safety protocols for AI chatbot operators, with core provisions effective January 1, 2026, and additional reporting obligations starting July 1, 2027.
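As a rough illustration of the recurring-reminder mandate, the Python sketch below shows a chat loop that tracks elapsed session time and injects an AI disclosure whenever a configurable interval has passed. The interval, the disclosure wording, and all function names here are illustrative assumptions, not language from the bill.

```python
import time

# Illustrative interval and wording, not the statutory figures.
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a real person."

def generate_reply(user_message: str) -> str:
    # Placeholder for the chatbot backend; a real model call would go here.
    return f"(model reply to: {user_message!r})"

class CompanionSession:
    """Tracks a chat session and decides when an AI disclosure is due."""

    def __init__(self, interval: float = REMINDER_INTERVAL_SECONDS):
        self.interval = interval
        # Treat session start as the first disclosure point.
        self.last_reminder = time.monotonic()

    def maybe_remind(self) -> str | None:
        """Return the disclosure text if the interval has elapsed, else None."""
        now = time.monotonic()
        if now - self.last_reminder >= self.interval:
            self.last_reminder = now
            return AI_DISCLOSURE
        return None

    def respond(self, user_message: str) -> list[str]:
        """Produce the model reply, prepending a disclosure when one is due."""
        outputs = []
        reminder = self.maybe_remind()
        if reminder:
            outputs.append(reminder)
        outputs.append(generate_reply(user_message))
        return outputs
```

Logic like this would typically live server-side, where a client cannot suppress it, and the same hook is a natural place to log disclosures for the bill's annual reporting requirements.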
On the federal front, the U.S. Federal Trade Commission (FTC) has launched an inquiry into the safety of AI chatbots acting as companions. The FTC has issued orders to seven companies, seeking information on how they assess and mitigate potential harms to children and teens. The agency wants to know what steps companies have taken to evaluate safety, limit use by minors, and inform users and parents about associated risks.
Back in Canada, the legal community is preparing for these shifts. Fasken, a leading privacy and cybersecurity law firm, hosted a Symposium on Privacy at its Montreal office on September 16, 2025, offering training and insights on compliance with new laws such as Quebec’s Law 25. The firm announced that 11 of its lawyers were recognized in the 2026 edition of Best Lawyers in Canada for Privacy and Data Security Law, and that it was ranked Band 2 in the Chambers Canada 2026 Guide for Privacy and Data Protection.
As governments and regulators worldwide race to keep pace with technological change, the message is clear: the era of lax digital oversight is ending. New penalties, updated laws, and cross-border agreements are reshaping the privacy and cybersecurity landscape, forcing organizations to rethink their practices and embrace a culture of compliance—or risk being left behind.