Technology
30 January 2025

DeepSeek Faces Global Scrutiny Over AI Security Risks

Concerns rise as companies block access amid data privacy fears linked to the Chinese AI startup.

Cybersecurity concerns are on the rise as DeepSeek, the Chinese artificial intelligence startup, faces increasing scrutiny from companies and government entities worldwide. Following its swift ascent to popularity, with its AI chatbot recently hitting the top of app download charts, numerous organizations have started to block access to DeepSeek's technologies due to fears about data leaks and inadequate privacy safeguards.

According to cybersecurity expert Nadir Izrael, chief technology officer of Armis Inc., hundreds of companies—particularly those linked to government operations—are taking the preventive measure of restricting access to DeepSeek's services. "The biggest concern is the AI model's potential data leakage to the Chinese government," Izrael stated, emphasizing the uncertainty around where sensitive information might end up. Similar sentiments were echoed by Netskope's Ray Canzanese, who reported that around 52 percent of the firm's clients have blocked access to DeepSeek.

DeepSeek has become the focal point of discussions around privacy and security following recent accolades from well-known tech executives, which propelled the app's downloads to unprecedented levels. The company’s own terms state it collects users’ keystrokes, chat history, and other sensitive content for AI model training, raising alarm bells among privacy advocates.

Compounding these worries, researchers have found significant security holes within DeepSeek’s infrastructure. A publicly accessible database was discovered, containing internal data such as chat histories and technical logs, leading to heightened concerns about the handling of user information. The startup quickly acted to address the vulnerability, but fears surrounding the lack of stringent security protocols remain.

DeepSeek’s privacy practices have caught the attention of several regulatory bodies. Ireland's Data Protection Commission has requested information to assess whether DeepSeek is properly safeguarding user data, a move that comes as European regulators tighten their enforcement of data privacy rules.

Italian officials have shown similar concerns, launching an inquiry into how citizens’ personal data is handled. They have demanded information on the origin of the data collected by DeepSeek and whether it is stored on Chinese servers. Failure to respond satisfactorily may lead to significant repercussions for the company.

The U.K.'s Information Commissioner’s Office has also issued warnings, stating generative AI developers must practice transparency with personal data usage.

Concerns are exacerbated by the fact that Chinese national security laws can compel companies operating within China's borders to provide the government access to data they control. This is particularly relevant following the U.S. government's restrictive actions against TikTok, which stemmed from similar worries.

Despite the rising apprehension, many are drawn to DeepSeek because of the impressive performance of its AI models. The company has developed its R1 model based on open-source methodologies, which allows cybersecurity ventures to explore its capabilities. While the advancements are impressive, experts caution against using tools lacking adequate security oversight.

Mehdi Osman, CEO of OpenReplay, is among those who have decided against using DeepSeek’s API due to security fears. He raised concerns that DeepSeek's low pricing could lure developers toward a potentially risky platform and away from more established options such as OpenAI's.

Cybersecurity analysts have expressed alarm over DeepSeek’s AI services, which they state may lack sufficient operational guardrails to deter malicious usage, such as crafting phishing emails or analyzing stolen data. Levi Gundert of Recorded Future commented on the vulnerabilities, warning of the potential for the model to fuel rapid increases in cyber and fraud attacks.

Researchers note that while DeepSeek's AI has gained rapid traction, the apparent security shortcomings of its services raise concerns: adopting AI at this pace without corresponding security measures can lead to perilous outcomes.

DeepSeek isn't just facing backlash from users concerned about security; it has also drawn attention from rival tech giants. Reports indicate Microsoft and OpenAI have commenced investigations to determine whether DeepSeek improperly used OpenAI's API and data to train its AI model.

The situation surrounding DeepSeek highlights the multifaceted challenges posed by the integration of AI systems within global infrastructure. The balancing act of fostering innovative technology against the backdrop of inherent security risks has become more pronounced with DeepSeek's rise. The company's ties to the Chinese government compound these challenges, leading to calls for increased scrutiny and stronger data protection measures.

While the company's advancements open up exciting possibilities within AI, they also underscore the need for greater awareness around data privacy, particularly for systems originating in jurisdictions with broad intelligence mandates. A collective focus on data protection must precede the rush to adopt the latest technology, lest sensitive information be exposed to unwanted prying eyes.

Experts recommend that enterprises engaging with powerful AI models like DeepSeek's adopt stringent security measures, including testing frameworks and thorough assessments of how user data is handled, to minimize risks.

DeepSeek’s current predicament exemplifies the urgency of implementing best practices for data security and handling, as the interplay between innovation and security will significantly shape the future of both AI and privacy regulations worldwide.