Meta Platforms announced on Monday, April 14, 2025, that it will use interactions with its artificial intelligence (AI), along with public posts and comments shared by adult users on its platforms, to train its AI models in the European Union (EU). The move follows the introduction of Meta AI in Europe in March 2025; originally slated for a June 2024 launch, the rollout had been postponed due to regulatory concerns over data protection and privacy.
While Meta AI made its debut in the United States in 2023, its European launch faced numerous hurdles, primarily the EU's stringent data protection and transparency rules. As part of the new initiative, Meta stated that EU users of its platforms, including Facebook, Instagram, and WhatsApp, will receive notifications explaining which types of data the company plans to use. Users will also be given a link to a form that allows them to object to the use of their data for training purposes.
Meta clarified that it will use data such as users' requests and queries to Meta AI, but will not use private messages or public data from accounts belonging to users under 18 for training. The move comes after the company paused the rollout of its AI models in Europe in June 2024 at the request of the Irish Data Protection Commission (DPC), which had urged it to postpone its plan to leverage data from social media posts. The company also drew criticism from the privacy advocacy group NOYB, which called on national data protection authorities to prevent such use of social media content.
In a broader context, Meta's move takes place against the backdrop of the EU AI Regulation (2024/1689), commonly referred to as the AI Act, which establishes a legal framework for artificial intelligence. The regulation takes a risk-based approach, meaning that not all AI systems are subject to the same rules. Widely used general-purpose AI systems such as ChatGPT or Claude, for instance, initially face less stringent requirements than systems classified as high-risk.
Companies deploying low-risk AI systems must ensure a sufficient level of AI literacy among their staff and fulfill the transparency obligations specified in the regulation. However, if an AI system is used in a sensitive area such as human resources, it may be classified as high-risk, which significantly increases those obligations.
Beyond the AI Act, data protection law applies: providers of AI systems are generally required to offer a data processing agreement (in German, Auftragsverarbeitungsvertrag, AVV). In addition, a data protection impact assessment (DPIA, in German DSFA) may be necessary, taking into account AI-specific risks to personal data, including integrity, confidentiality, fairness, autonomy, control, and transparency.
Another critical aspect of the European AI landscape is the intersection of data privacy and AI. The models underlying AI systems are not designed to allow individual pieces of information to be deleted once they have been processed, and they are largely opaque to the average user. This creates significant challenges for enforcing the rights of data subjects, such as the right to erasure. While it is technically possible to operate an AI system without any personal data, doing so is complex and quickly reaches its limits.
Despite these challenges, personal data can be processed in AI systems, provided it is not used to train the AI model itself or for other, unrelated corporate purposes. As noted above, a data processing agreement and, where necessary, a data protection impact assessment remain formal prerequisites.
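To illustrate one practical safeguard, obvious personal identifiers can be stripped or pseudonymized before any text reaches an external AI service. The following Python sketch is a minimal, hypothetical example: the regular expressions, the redact helper, and the send_to_ai_service placeholder are assumptions made for illustration, not the API of any particular provider.

```python
import re

# Hypothetical, deliberately simple patterns; real deployments need far more
# robust detection (names, addresses, customer IDs) and dedicated tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before the text
    is passed to an external AI system."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def send_to_ai_service(prompt: str) -> str:
    # Placeholder for the actual provider call; the contract with the provider
    # should additionally exclude use of the input for model training.
    raise NotImplementedError

if __name__ == "__main__":
    raw = "Customer Jane Roe (jane.roe@example.com, +49 170 1234567) reports a billing error."
    print(redact(raw))
    # Prints: Customer Jane Roe ([EMAIL], [PHONE]) reports a billing error.
```

Note that the name in the example is not caught by such simple rules, which shows why purely technical anonymization quickly reaches its limits and why the contractual and organizational measures described above remain necessary.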
On the copyright front, AI systems cannot be recognized as authors under German copyright law because the creative process lacks a human contribution. Problems can arise when users upload copyrighted material into the systems, which may then be used to train the models. Providers are therefore advised to contractually prohibit users from submitting copyrighted content for training the AI model. Caution is also warranted when commercially exploiting generated results, as AI outputs may closely resemble existing copyrighted works and thus infringe third-party rights.
In day-to-day business, contracts for IT systems and services are commonplace. When purchasing AI systems, it is crucial to ensure that the generated results may be used for commercial purposes, for example in marketing. The risks described above can be partially mitigated through contractual liability provisions; however, this requires AI providers to agree to individually negotiated liability arrangements, and the clauses must withstand judicial review of standard terms and conditions.
When AI systems are used to deliver services to customers, these considerations should also flow into contract drafting. In particular, if AI-generated output leads to defective performance, the company may initially bear full liability toward its customer; AI-generated code that introduces security vulnerabilities is one example, as sketched below. If a customer wants AI systems to be used for reasons of time and cost efficiency, this should be stated explicitly in the agreement and the resulting liability risk shared appropriately.
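As a purely hypothetical illustration of such a defect, an AI assistant might generate database code that concatenates user input directly into a query, creating an SQL injection risk. The sketch below contrasts that pattern with a parameterized version; the table and column names are invented for the example.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Pattern sometimes seen in generated code: user input concatenated
    # directly into the SQL string, opening the door to SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the database driver handles the input safely.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

If a fragment like the first function reaches a customer unchecked, the question of who bears the resulting liability becomes very concrete, which is precisely why it should be addressed in the contract.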
In conclusion, while AI can be used effectively within organizations to enhance and streamline daily operations, the legal considerations must not be overlooked. With proper preparation, companies can avoid the most significant pitfalls. As a first step toward risk mitigation, every company is well advised to establish an AI policy and ensure that employees are adequately informed.