Italy’s Data Protection Authority, known as the Garante, has raised concerns over the data practices of AI firm DeepSeek, underscoring potential risks to user privacy. The agency has formally requested clarification from Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, the companies behind the popular DeepSeek chatbot service, which is available both on the web and as a mobile app.
Garante's announcement highlights its concerns over the volume of personal data DeepSeek collects. “Given the potentially high risk for millions of people’s data in Italy, the Authority asked the two companies and their subsidiaries to confirm which personal data are collected, the sources used, the purposes pursued, the legal basis of the processing, and whether they are stored on servers located in China,” it stated. The privacy watchdog has given DeepSeek 20 days to respond, underscoring the urgency of the inquiry.
In 2023, the Italian authority took similarly decisive action against another AI service, temporarily banning ChatGPT over unlawful data collection practices and a failure to protect the privacy of minors. The ruling against ChatGPT rested partly on a lack of transparency about how data was used and on insufficient age verification measures. This pattern of scrutiny signals increasing vigilance among regulators over large tech companies’ handling of personal data.
Garante also requested information about DeepSeek's training algorithms, web scraping methods, and systems for notifying users about data collection practices. This follows earlier concerns voiced by the Authority, particularly about the service's potential exposure of sensitive or inaccurate information.
These developments come amid reports of serious security vulnerabilities associated with DeepSeek. A recent investigation by Wiz Research revealed a publicly accessible ClickHouse database belonging to DeepSeek, which exposed chat history, secret keys, and backend details. “Within minutes, we found a publicly accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated,” said the report from Wiz Research. “This database contained sensitive data, including log streams, API Secrets, and operational details.”
The researchers underlined the severity of the risk: attackers could have gained control of the database without any authentication. The exposure stemmed from two unusual open ports on DeepSeek’s servers, which provided direct access to the unsecured ClickHouse instance.
Upon making the discovery, Wiz Research refrained from running intrusive queries in line with ethical research practices, but the exposed data included more than one million log entries containing plain-text chat messages and other highly sensitive information. “Not only could attackers retrieve sensitive logs and actual chat messages,” the report concluded, “but they could potentially exfiltrate plaintext passwords and local files depending on their ClickHouse configuration.”
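To make concrete what “completely open and unauthenticated” means in practice, the sketch below shows how a researcher might check whether a ClickHouse HTTP interface answers queries without credentials. It assumes the default ClickHouse HTTP port (8123), uses a placeholder host rather than DeepSeek’s actual endpoint, and limits itself to ClickHouse’s built-in /ping health check plus a harmless metadata query; it is not a reconstruction of Wiz Research’s own tooling.

```python
# Minimal sketch (assumption: the target runs ClickHouse's HTTP interface on
# its default port 8123). The host below is a hypothetical placeholder, not
# DeepSeek's real endpoint, and this is not Wiz Research's actual tooling.
import requests

HOST = "http://clickhouse.example.com:8123"  # hypothetical host for illustration


def is_open_clickhouse(host: str) -> bool:
    """Return True if the ClickHouse HTTP interface answers without credentials."""
    try:
        # /ping is ClickHouse's built-in health endpoint; it returns "Ok." when the server is up.
        if requests.get(f"{host}/ping", timeout=5).status_code != 200:
            return False
        # A harmless metadata query: if listing databases succeeds with no
        # credentials supplied, the instance is effectively unauthenticated.
        resp = requests.get(host, params={"query": "SHOW DATABASES"}, timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False


if __name__ == "__main__":
    status = "open and unauthenticated" if is_open_clickhouse(HOST) else "not openly reachable"
    print(f"{HOST} is {status}")
```

A result like this is typically the point at which researchers stop probing and move to responsible disclosure, which is the approach Wiz Research describes taking.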
Amid these security concerns, DeepSeek confirmed that it had restricted new user registrations, attributing the move to ongoing attacks on its services. “Due to large-scale malicious attacks on DeepSeek’s services, we are temporarily limiting registrations to assure continued service. Existing users can log in as usual,” the company stated. The measure is intended to keep the service available for existing users while the attacks continue.
DeepSeek's app has consistently ranked among the most downloaded applications on global platforms, including Apple's App Store. Yet following the regulatory scrutiny, the DeepSeek AI Assistant app became unavailable in both the Apple App Store and the Google Play Store in Italy, an apparent defensive response to the concerns raised by Garante.
DeepSeek now has to navigate these troubles on two fronts. With the AI industry under increasing scrutiny, especially over the handling of sensitive personal data, companies like DeepSeek must take regulatory inquiries and security lapses seriously. The stakes are high: millions of users who trusted the DeepSeek platform are awaiting reassurance about their data privacy and security.
Looking ahead, stakeholders and users alike will be watching how DeepSeek responds to Garante's inquiry and whether it takes effective measures to secure its platforms and reassure users about the integrity of their data. The regulatory and technical challenges facing DeepSeek could set important precedents for other firms grappling with similar issues, signal the need for stronger data governance practices across the board, and perhaps even shape future regulation of the rapidly growing AI sector.