LinkedIn, owned by Microsoft, is one of the largest professional social networking platforms in the world, with more than one billion members across over 200 countries and territories. It is the go-to platform for presenting a professional profile, showcasing areas of expertise, and expanding a professional network. Users rely heavily on LinkedIn not only for networking but also for job hunting and for engaging with industry colleagues through specialized groups.
But as helpful as LinkedIn can be, the platform also looks to benefit from its users. It recently introduced new privacy settings related to generative AI training that allow user data to be used to train artificial intelligence models, a change that has sparked concern among users about how their data is being handled.
LinkedIn's new setting emerged alongside updates to its terms of service that permit user data to be used for AI development without requiring explicit consent. A report from TechCrunch highlights a key discrepancy: users in the U.S. have the option to opt out of data collection for AI training, unlike users in the EU or European Economic Area, where stricter data privacy laws apply.
This lack of required consent for AI training caught many users off guard. Reports from 404 Media described confusion among users who encountered the new data usage without prior warning, as the platform had not yet updated its privacy policy to reflect the change.
LinkedIn confirmed its plans to use the collected data for a range of AI features, including content generation and suggestion tools such as writing prompts and post recommendations. The company acknowledged that user data helps improve these capabilities, underscoring its reliance on extensive interaction data to refine the platform.
According to Greg Snapper, LinkedIn's Executive Director of Corporate Communications, "If we succeed at this, we can help many people at scale." The quote reflects the company's belief in the transformative potential of AI tools and its stated intention to create new opportunities for users worldwide.
Nevertheless, the initiative has not been without backlash. Critics argue the company has failed to be transparent about its data collection methods and are pressing for accountability from tech giants. The Open Rights Group, a non-profit dedicated to championing digital rights, has publicly called for investigations, with Legal and Policy Officer Mariano Delli Santi stating, "LinkedIn is the latest social media platform discovered to be processing our data without asking for consent."
Continued unrest over these policies has also drawn scrutiny from media outlets. The Washington Post, for example, reported dissatisfaction among users who felt blindsided that data usage for AI training is switched on by default rather than offered as a choice.
With privacy an increasingly hot-button issue, users are encouraged to familiarize themselves with their account settings and with how to opt out if they choose. For those who want to exclude their data from AI training, LinkedIn provides the following steps:
- Log in to your LinkedIn account.
- Click on your profile picture at the top of the page and select "Settings and Privacy" from the dropdown menu.
- Navigate to the "Data Privacy" section on the left side.
- Select the option "Data for Generative AI Improvement" at the bottom of the "How LinkedIn uses your data" section.
- Toggle off the option "Use my data for training content creation AI models."
This push for user agency makes it imperative for individuals to stay informed about their data privacy rights, especially as companies like LinkedIn leverage the massive amounts of content their users produce.
The question remains: can these platforms strike a balance between leveraging technological advances and safeguarding user data? Users rightfully demand clarity and consent from companies that benefit from their information, underlining the urgent need for policymakers and tech companies to prioritize transparency and respect user privacy. The future of AI training on social platform data hinges on establishing standards of trust and protection between companies and their users.