Technology
15 April 2025

Meta Enhances Privacy Features While Training AI Models

The tech giant tests a new WhatsApp privacy feature and prepares to use public data from European users for AI training.

On April 15, 2025, Meta Platforms, Inc., the parent company of Facebook and Instagram, announced significant updates to its services focused on user privacy and artificial intelligence. The company is testing a new WhatsApp feature that strengthens privacy for users, while also preparing to use publicly available data from European users to train its AI models.

The latest feature being tested on WhatsApp aims to prevent users from taking screenshots of photos and videos sent through the app. Known as the "protected screenshot" feature, it is designed to bolster privacy by ensuring that disappearing media cannot be captured for later viewing. It builds on the earlier "view once" feature, which lets users send photos that can be viewed only once before disappearing; senders enable it by selecting the "view once" option, marked by a circular icon containing the number 1.

Currently, this new privacy feature is being trialed in the beta version of WhatsApp for iOS users through TestFlight. Although it is not yet available to the general public, Meta has indicated that it will roll out the feature to all users gradually once testing concludes.

In a related development, Meta is also gearing up to train its AI models using publicly available data from users in the European Union. The move follows delays caused by the EU's stringent data privacy laws, which had constrained the company's ability to use user data for AI training. According to Meta, the training will draw on public posts and comments shared by adult users across the 27 EU member states. The company emphasized that it will respect users' privacy choices and allow them to opt out of this data usage.

Meta's move to leverage public data for AI training follows the recent launch of its AI assistant, Meta AI, in Europe. The assistant was initially made available in the U.S. and other major markets but faced hurdles in Europe due to privacy concerns raised by activists and regulatory bodies. The Irish Data Protection Commission had previously requested that Meta delay its AI training plans until privacy issues could be addressed.

In response to these privacy concerns, Meta has assured users that it will not use private messages for AI training. The company stated, "We will respect all opt-out choices," highlighting its commitment to user privacy and compliance with European regulations. This assurance is crucial, especially as Meta navigates the complex landscape of data privacy laws that grant individuals significant control over their personal information.

Meta's approach has drawn criticism from privacy advocates, particularly NOYB (None of Your Business), the group led by privacy activist Max Schrems. The organization has filed complaints with national privacy regulators, urging them to stop Meta from proceeding with its AI training plans. Schrems and others argue that the company's practices could undermine user privacy and violate existing regulations.

Despite these challenges, Meta remains committed to its AI initiatives and has pointed to a December opinion from the European Data Protection Board, the body of EU privacy regulators, which confirmed that its original approach met its legal obligations. The company has said it will notify users in the EU about the training process and provide a link to an opt-out form, ensuring transparency and user control.

As Meta continues to innovate and adapt its services, the balance between user privacy and technological advancement remains a focal point of discussion. The company's efforts to enhance privacy on WhatsApp and responsibly utilize public data for AI training reflect its recognition of the evolving landscape of user expectations and regulatory requirements.

Meta's dual focus on enhancing user privacy through new WhatsApp features and training its AI models on publicly available data illustrates the company's effort to navigate the complexities of data privacy in an increasingly digital world. As these initiatives unfold, users and regulators alike will be watching closely to see how Meta balances innovation with the protection of personal information.