Technology
07 May 2025

Consumer Advocates Challenge Meta Over AI Training Practices

Data protection breaches in Europe rise as AI poses new risks and regulatory challenges

The landscape of data protection in Europe is facing increasing scrutiny as consumer advocates take action against major tech companies such as Meta over the use of personal data to train artificial intelligence (AI) systems. The rise of AI has brought both advances and challenges in cybersecurity and data privacy, prompting debate about the balance between innovation and regulation.

According to a recent analysis by heyData, over 130,000 data protection breaches were registered in Europe in 2024, highlighting the persistent issues surrounding data security. The Netherlands reported a staggering 65 percent increase in breaches, totaling 33,471 cases. Spain and Italy also saw significant rises of 47 percent and 42 percent, respectively. In contrast, Germany managed a 13 percent decrease, with 27,829 reported breaches, attributed to improved awareness programs and internal processes.

The role of AI in this context is multifaceted. While AI systems can help identify risks early, they also introduce new dangers. For instance, automated application systems have been implicated in data protection incidents, processing personal data without sufficient consent. Martin Bastius, Chief Legal Officer at heyData, remarked, "AI can be both a shield and a gateway for data protection breaches," underscoring the dual nature of AI's impact on privacy.

In response to these challenges, the European Union is establishing uniform standards for AI systems through the AI Act, which introduces transparency requirements, risk classifications, and measures to ensure compliance with data protection rules, with most obligations applying from 2026.

Meanwhile, consumer advocates in Germany are particularly concerned about Meta's plans to train its AI software, Meta AI, on user data from platforms like Instagram and Facebook. The Verbraucherzentrale Nordrhein-Westfalen (VZ NRW) has issued a warning, asserting that the practice violates European data protection law. It argues that the use of adult users' publicly shared contributions, which include names, usernames, profile pictures, and interactions with public content, lacks a proper legal basis.

Meta plans to use this data to enhance its AI capabilities, stating that such training is common in the industry and essential for developing AI products that better understand local cultures and languages. VZ NRW, however, disputes the legality of this approach, arguing that users' personal information should not be used for AI training without their explicit consent.

"The blanket reference to a 'legitimate interest' is insufficient," stated Christine Steffen, a data protection expert at VZ NRW. She emphasized that individuals should not be expected to accept that their long-held personal information could be repurposed for AI training without their active consent. Users are given the option to object to this use of their public information until May 26, 2025, and they can do so without needing to provide a justification.

Meta has responded to the consumer advocates' actions by arguing that an injunction against its AI training would be a significant setback both for consumers seeking relevant local AI technology and for businesses relying on AI models that understand local nuances. The company contends that the training process is vital for the advancement of AI technology in Germany.

As the discussion around AI and data protection continues, the implications of these developments are vast. The potential for AI to enhance cybersecurity measures is evident, as AI can improve threat detection, response times, and vulnerability management. However, the risks associated with AI, particularly concerning data privacy, necessitate a careful approach to regulation.

Dr. Christoph Bausewein, Assistant General Counsel for Data Protection and Policy at CrowdStrike, emphasizes that AI's role in cybersecurity is crucial. He notes that AI can help identify subtle signs of cyber threats in large data sets, allowing for proactive measures to be taken before threats evolve into significant attacks. Moreover, AI's ability to streamline responses to threats can drastically reduce reaction times, which is essential in an age where attacks can spread rapidly across networks.
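
To make that detection idea concrete, the sketch below applies an unsupervised anomaly detector to synthetic network telemetry and flags events that deviate from the bulk of traffic. It is an illustration only: the feature names, values, and contamination rate are hypothetical assumptions, not any specific vendor's method.

```python
# Illustrative sketch: flagging anomalous events in security telemetry.
# Features and thresholds are hypothetical; real pipelines derive them
# from actual log schemas.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per event: bytes transferred, failed logins,
# distinct destination hosts contacted within a time window.
normal = rng.normal(loc=[500, 1, 3], scale=[100, 1, 1], size=(1000, 3))
suspicious = rng.normal(loc=[5000, 12, 40], scale=[500, 2, 5], size=(5, 3))
events = np.vstack([normal, suspicious])

# Unsupervised model: isolates points that look unlike the bulk of traffic.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(events)  # -1 = anomaly, 1 = inlier

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} events flagged for analyst review: {flagged}")
```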

Despite the challenges, the integration of AI into cybersecurity solutions provides companies with a competitive edge in defending against cyber threats. AI-native tools enable continuous monitoring and automated reviews for security vulnerabilities, ensuring that resources are allocated to the most critical issues.
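
The "most critical issues first" step can be illustrated with a simple triage heuristic: score each finding by base severity, weight it by exposure and exploit availability, and work the list from the top. The fields and weights below are hypothetical assumptions, not a standard industry formula.

```python
# Illustrative sketch of automated vulnerability triage: rank findings so
# scarce remediation effort goes to the most critical issues first.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float            # base severity, 0-10
    internet_facing: bool  # exposed assets weigh heavier
    exploit_known: bool    # a public exploit raises urgency

def priority(f: Finding) -> float:
    # Hypothetical weights chosen for illustration only.
    score = f.cvss
    if f.internet_facing:
        score *= 1.5
    if f.exploit_known:
        score *= 1.3
    return score

findings = [
    Finding("web-01", "CVE-2024-0001", 7.5, True, True),
    Finding("db-02", "CVE-2024-0002", 9.1, False, False),
    Finding("mail-03", "CVE-2024-0003", 5.4, True, False),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.host:8s} {f.cve}")
```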

However, the protection of AI systems themselves is also paramount. Organizations must adopt a "Privacy-by-Design" and "Secure-by-Design" approach, ensuring that both security and privacy considerations are integrated into AI development. This includes maintaining the integrity of AI models through carefully curated training data and implementing continuous improvement processes to adapt to new threats.
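
As a rough illustration of what "Privacy-by-Design" data curation can involve, the sketch below pseudonymizes user identifiers and scrubs obvious personal data before records enter a training corpus. The regex, salt handling, and record format are simplified assumptions; production systems need vetted PII detection and proper key management.

```python
# Illustrative sketch: pseudonymize identifiers and strip e-mail addresses
# from free text before storing records for model training.
import hashlib
import re

SALT = b"rotate-me-per-deployment"  # hypothetical; keep in a secrets manager
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(user_id: str) -> str:
    """Replace a stable identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def scrub(text: str) -> str:
    """Remove e-mail addresses from free text before it is stored."""
    return EMAIL_RE.sub("[email removed]", text)

record = {"user": "alice_1990", "post": "Contact me at alice@example.com!"}
curated = {"user": pseudonymize(record["user"]), "post": scrub(record["post"])}
print(curated)
```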

As AI continues to evolve, the collaboration between human experts and AI systems will be vital. Human input is essential for generating accurate training data and providing feedback that enhances AI performance. This partnership can help ensure that AI remains one step ahead of emerging threats while respecting data privacy regulations.

In summary, the intersection of AI, data protection, and cybersecurity is a complex landscape that requires ongoing dialogue and regulation. As the EU moves towards establishing clearer guidelines for AI systems, the actions taken by consumer advocates against companies like Meta highlight the need for a careful balance between innovation and the protection of individual privacy rights.