Technology
09 March 2025

Pinterest Updates Privacy Policy To Train AI Using User Data

Controversial changes raise privacy concerns as AI technology evolves.

Pinterest, the visual discovery engine, has stirred controversy with its recent update to user privacy policies, effective March 9, 2025. The changes permit Pinterest to utilize user data, photos, and information to train its artificial intelligence (AI) tools. This means all user-generated content on the platform, dating back to 2010, will now be subject to AI training, raising questions about the extent of privacy users must sacrifice for improved technology.

According to Fruitism, the updated terms published on Pinterest's website reveal the company's intention to use user information to develop and improve its technology and machine learning models. The company says the initiative is aimed at enhancing the products and services it provides and at introducing new features. "Nothing has changed about our use of user data to train Pinterest Canvas, our GenAI model," a Pinterest spokesperson stated, emphasizing that the company's existing data practices continue, albeit now formalized in the terms.

Users can still opt out of this data usage by adjusting their profile settings, according to the spokesperson. Nonetheless, Fruitism reported that skepticism remains about the ethical ramifications of such data collection, since user-generated content may be used indefinitely for AI training.

Concerns around AI privacy did not end with Pinterest’s policy update. At the SXSW 2025 Conference and Festivals held on March 7, 2025, Meredith Whittaker, president of the Signal Technology Foundation, voiced grave concerns about the privacy risks of what she termed agentic AI. This type of AI operates autonomously, performing tasks on behalf of users without their direct input.

Whittaker articulated the dangers involved: "I think there's a real danger we're facing, in part because what we're doing is giving so much control to these systems..." For example, she described how agentic AI would require access to numerous private data points, including browser history, credit card details, and personal calendars, just to perform simple tasks like finding concert tickets or messaging friends about plans.

Highlighting the transformative, yet invasive, nature of these AI agents, Whittaker remarked, "It would need to be able to drive across our entire system with something like root permission..." She warned about the potential for these systems to compromise user privacy, noting, "There's also the possibility of processing sensitive information off-device, which could lead to vulnerabilities and breaches of privacy."

Adding to the caution, AI pioneer and Canadian scientist Yoshua Bengio echoed similar sentiments during his appearance at the World Economic Forum earlier this year. He discussed the potential dangers of agentic AI, remarking, "All the catastrophic scenarios with AGI or superintelligence happen if we have agents...We must acknowledge the risks and make technological investments to prevent harmful outcomes before it’s too late." His warning highlights the urgency for the tech industry and policymakers to manage these risks thoughtfully.

The growing concern surrounding AI privacy urges users to confront serious trade-offs. A recent article suggests asking, "What do you want out of AI?" For many, the desire for convenience may conflict with privacy expectations. Generative AI, which has become integral to modern technology, often functions on data collected from across the internet, raising ethical questions about content ownership and privacy.

The author recounts their experiences at Automated Insights around 2015, where they had access to extensive personal data. A conversation with their South Korean counterpart on the lack of privacy yielded the phrase, "the trains run on time," illustrating the complex balance between efficiency and privacy. If we think past our immediate desires for advanced AI, the call for accountability and privacy becomes clear.

With AI agents encroaching on less public information, the potential to integrate vast amounts of personal data raises significant ethical questions. Technology advocates and critics alike must engage with the broader conversation on privacy to define the future of AI responsibly.

The complexity lies not only in implementing sophisticated AI technology but also in ensuring it respects user privacy. AI could aid initiatives such as human trafficking prevention, but how much personal information users are prepared to share remains to be seen. The trade-offs inherent in such advancements are unavoidably complex.

Users must grapple with whether the benefits of advanced AI are worth the extensive data sharing required to realize those benefits. The societal conversations about AI's impact often fail to reflect the complexity of these dilemmas. Users and policymakers alike must engage more deeply with these questions to avert unintended consequences and preserve the integrity of personal data.

"Is this trade-off worthwhile?" becomes the pivotal question as we navigate this uncharted territory of AI innovation and personal privacy.