The European Union's regulation on artificial intelligence (AI) is set to comprehensively govern the use of AI technologies, but it raises significant questions regarding its compatibility with existing data protection laws, particularly the General Data Protection Regulation (GDPR). In a recent episode of the c't Datenschutz Podcast, experts including Prof. Dr. Rolf Schwartmann, the chairman of the Society for Data Protection and Data Security (GDD), discussed the complexities arising from the intersection of these two regulatory frameworks.
According to Schwartmann, the GDPR and the AI regulation serve different purposes. The GDPR lays down clear guidelines for the handling of personal data, while the AI regulation acts as a product regulation, outlining the conditions under which AI providers and users can operate. Both regulations aim to protect fundamental rights, but they adopt fundamentally different approaches.
One of the central issues highlighted during the podcast is the interaction between these laws and generative AI systems, such as ChatGPT. These AI models are inherently flexible and not bound to specific purposes, which poses a challenge to the GDPR's principle of purpose limitation. Schwartmann warned that this could lead to widespread misuse of AI technologies without legal repercussions. He emphasized that users of generative AI might be held liable for damages resulting from incorrect outputs.
Furthermore, the podcast addressed the issue of automated decision-making. The GDPR generally prohibits decisions based solely on automated processing that produce legal effects or similarly significant impacts on individuals. The AI regulation, in contrast, permits such systems under strict conditions, albeit with extensive compliance obligations. This discrepancy raises concerns about conflicting interpretations and applications of the two regulations.
Heise's legal expert Joerg Heidrich criticized the legislative approach, arguing that it missed opportunities to resolve fundamental data protection conflicts raised by modern AI systems. He pointed out that the AI regulation offers only narrow exceptions to data protection rules, notably for processing sensitive data to detect and correct bias in AI models and for use in controlled testing environments known as regulatory sandboxes ("Reallabore").
In a related development, Marit Hansen, the State Data Protection Commissioner for Schleswig-Holstein, reported a rise in data breaches in her region. In 2024, there were 602 reports of violations of personal data protection, an increase from 527 the previous year. The number of complaints surged by 278 to a total of 1,628, with nearly one-fifth of these complaints concerning video surveillance.
Hansen noted that the reported breaches are not limited to seemingly minor incidents such as misdirected letters or emails, which can nonetheless have significant consequences for those affected. Some cases involve substantial violations, including large-scale manipulation of invoices and attacks on IT systems containing customer data. The increase in complaints reflects growing concern over data privacy and security.
Additionally, Hansen expressed concerns about the transparency and reliability of AI systems. She stated, "AI systems are largely opaque and make mistakes," highlighting that many of these technologies are developed using personal data or are deployed in scenarios involving personal information. With the European Union continuing to evolve its data laws, including new legislation governing AI and cybersecurity, Hansen emphasized the need for increased guidance on the intersection of AI and data protection.
The discussion around data protection also included insights from Germany's new Federal Commissioner for Data Protection and Freedom of Information, Prof. Dr. Louisa Specht-Riemenschneider, who took office on September 3, 2024. In her role, she has expressed concerns about the implications of AI for democratic societies, citing a real danger posed by the misuse of personal data and the potential for erosion of privacy rights.
As the EU moves forward with its AI regulation, the interplay between this new framework and the GDPR continues to be a topic of intense debate among legal experts, policymakers, and data protection advocates. The challenge lies in creating a cohesive regulatory environment that safeguards individual rights while promoting innovation in AI technologies.
In conclusion, the ongoing discussions underscore the urgent need for clarity and alignment between the EU's AI regulation and the GDPR. As new technologies emerge, ensuring that data protection principles are upheld will be crucial in maintaining public trust and protecting individual rights in an increasingly digital world.