As the race to develop ever more powerful artificial intelligence tools heats up, concerns about data privacy have intensified. A recent comprehensive study by data privacy firm Incogni has shed light on how some of the world’s leading AI platforms handle user data, revealing a wide spectrum of privacy practices among the most popular generative AI (GenAI) and large language model (LLM) systems.
Incogni’s research, published on June 24, 2025, evaluated nine prominent AI platforms against 11 key criteria grouped into three main categories: AI-specific privacy issues, transparency, and data collection. The goal was to assess how these platforms manage sensitive user data, particularly in training AI models, and how clearly they communicate their policies to users.
At the forefront of privacy-conscious AI platforms is Le Chat, the assistant from Paris-based Mistral AI, which topped the rankings as the most privacy-friendly system. Le Chat impressed researchers by limiting data collection and offering users the option to opt out of having their prompts used to train AI models. It also clearly flagged when user inputs would contribute to future AI development, providing a level of transparency that is rare in the industry. Notably, Le Chat’s mobile applications for Android and iOS were found to collect and share the least amount of data, further cementing its reputation as a privacy-focused platform.
Close behind was OpenAI’s ChatGPT, which boasts an enormous global user base estimated between 800 million and 1 billion weekly active users, according to CEO Sam Altman. Given that nearly 10% of the world’s population regularly interacts with ChatGPT, the platform’s approach to user privacy is critical. Incogni ranked ChatGPT among the least invasive AI models regarding data privacy and praised its transparency. The platform clearly informs users whether their prompts will be used for training, and its privacy policies are notably digestible compared to many competitors. ChatGPT, along with Microsoft Copilot, Le Chat, and xAI’s Grok, allows users to opt out of having their inputs used for training, a feature that enhances user control over personal data.
Grok, from Elon Musk’s xAI, also scored well, landing third overall. Grok stood out for its user-friendly privacy policies and its respect for user preferences regarding data use in training. However, its Android app was found to share photos provided by users with third parties, a detail that raises privacy concerns.
On the other end of the spectrum, the study exposed significant privacy concerns with platforms from tech giants Google, Microsoft, and Meta. Meta AI, in particular, was ranked the worst for data privacy among the nine platforms assessed. Google’s Gemini model was the second most privacy-invasive overall, though it performed better in AI-specific privacy issues by restricting prompt sharing to necessary service providers and legal entities only. Despite this, Gemini does not offer users the option to opt out of having their prompts used for training, a shortcoming shared by Meta AI and some lesser-known platforms like DeepSeek and Inflection AI’s Pi AI.
Meta AI’s data practices were especially troubling. The platform shares user prompts with corporate group members and research partners, and its mobile app collects precise location and address information, as does Gemini’s app. Google’s AI model and Chinese-owned DeepSeek were the only platforms noted for collecting users’ phone numbers, adding another layer of personal data exposure. Microsoft and Meta also ranked as the most “data-hungry” platforms, collecting and sharing substantial personal data, including from third-party sources such as marketing partners and financial institutions.
Privacy policies across these platforms often add to user confusion. Incogni’s researchers found that most require a university-graduate reading level to comprehend fully, making it impractical for average users to understand the nuances of data handling. The biggest tech companies—Meta, Microsoft, and Google—lack dedicated AI privacy policies, instead folding AI data practices into broad, general privacy statements. In contrast, OpenAI, Anthropic, and xAI provide clearer, more accessible information, often supplemented with support articles that break down complex policies.
Anthropic, another AI developer included in the study, claimed it never uses user inputs to train its models, standing out positively on AI-specific privacy concerns. Meanwhile, Inflection AI’s Pi AI ranked worst in this category, despite not sharing user prompts beyond necessary service providers.
The implications of these findings are significant, especially as AI tools become deeply embedded in professional and personal workflows. A 2024 report by Cyberhaven highlighted a 156% increase in sensitive data input into chatbots by employees compared to the previous year, with 27.4% of all data submitted classified as sensitive. The National Cybersecurity Alliance and CybSafe’s survey from the same year revealed that over a third of employees using AI at work admitted to submitting sensitive information to these tools, often through personal accounts lacking enterprise-level data protections—a phenomenon dubbed “shadow AI.” Cyberhaven’s data showed that 73.8% of employee ChatGPT use and a staggering 94.4% of Gemini use occurred on personal accounts, amplifying privacy risks.
Incogni’s report underscores the urgent need for clearer, more accessible, and up-to-date information on how AI companies handle user data. As AI integration accelerates, the potential for unauthorized data sharing, misuse, and personal data exposure is growing faster than regulators and privacy watchdogs can respond. The study’s authors caution that maintaining awareness of these evolving privacy risks has become impractical for the average user, especially given the complexity of privacy policies and the opacity of some data practices.
In this rapidly changing landscape, users and organizations must weigh the benefits of AI tools against the privacy trade-offs they entail. While platforms like Le Chat and ChatGPT are setting new standards for transparency and user control, the practices of major tech players highlight ongoing challenges in safeguarding personal data. The findings serve as a wake-up call for both developers and regulators to prioritize privacy as a fundamental pillar of AI innovation.