DeepSeek, the fast-rising AI application from China, is under scrutiny for well-documented censorship practices, which some observers believed existed only at the application layer. A new investigation by Wired argues otherwise, asserting that the model's censorship is embedded at both the application and training levels.
Wired's examination of DeepSeek's reasoning feature illustrates the point: in its visible chain of thought, the model reminded itself to "avoid mentioning" events like the Cultural Revolution and to focus only on favorable portrayals of the Chinese Communist Party, cutting users off from significant parts of historical discourse.
Further checks by TechCrunch reaffirmed these findings. Asked about the Kent State shootings, the AI answered readily, yet when the sensitive subject of the 1989 Tiananmen Square protests was broached, it flatly responded: "I cannot answer." The contrast underscores how unevenly DeepSeek handles contentious topics and raises concerns about its reliability.
If concerns about privacy and censorship have kept people from trying DeepSeek, there is now an alternative route. A version of DeepSeek's R1 reasoning model is integrated into the Perplexity platform, which has built its reputation on letting users choose among multiple AI models to answer their prompts and now counts DeepSeek R1 as its latest addition.
Aravind Srinivas, co-founder and CEO of Perplexity, has reassured prospective users that the censorship guardrails have been removed. Unlike the official DeepSeek app, the Perplexity-hosted model will discuss sensitive topics like the Tiananmen Square protests without the usual pushback or evasive answers. This is possible because the DeepSeek models Perplexity serves are open-source, so they can be modified, whether by tech-savvy individuals or by Perplexity's own engineers.
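To give a sense of what "open-source" means in practice here, the sketch below pulls one of DeepSeek's openly published R1 checkpoints from Hugging Face and runs it locally with the transformers library. This is purely illustrative and is not Perplexity's actual setup; the distilled 1.5B variant stands in for the far larger full R1 model, and the exact repository name should be verified against DeepSeek's Hugging Face page.

```python
# Illustrative sketch: running an open-weight DeepSeek R1 distill locally.
# Not Perplexity's deployment; the model ID below is one of DeepSeek's
# published distilled checkpoints and may change over time.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # small distilled checkpoint for demo purposes
)

result = generator(
    "Briefly explain what happened at Kent State in 1970.",
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```

Because the weights are downloadable, a hosting provider can fine-tune or post-train such a checkpoint to change its behavior, which is the kind of modification Perplexity describes.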
The R1 reasoning model is available to both free users and Pro subscribers. Free users get five queries per day, while paid subscribers can submit up to 500 queries daily for $20 per month. For anyone eager to try DeepSeek's fast performance without worrying about their data being monitored by the Chinese government or running into strict internal censorship, this route looks enticing.
User data stays with Perplexity and is stored on servers in the United States, in line with the company's privacy policy. The integration also reflects a wider trend, as other industry players follow suit; Microsoft, for example, is bringing DeepSeek R1 to Windows through its Copilot+ PCs.
Using DeepSeek through Perplexity is straightforward. The model is available on the web as well as in the Android and iOS apps. On the web, prompts are submitted directly through the interface, and users can toggle between AI models according to their preferences.
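For developers who would rather script their queries than use the apps, Perplexity also exposes an OpenAI-compatible API. The sketch below assumes an R1-backed reasoning model is reachable through that API under the name shown; the model identifier, availability, and pricing should be checked against Perplexity's current API documentation.

```python
# Hypothetical sketch of querying an R1-based reasoning model through
# Perplexity's OpenAI-compatible API instead of the web or mobile apps.
# The model name "sonar-reasoning" is an assumption; consult the current docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder; generated in account settings
    base_url="https://api.perplexity.ai",  # Perplexity's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="sonar-reasoning",               # assumed R1-backed reasoning model
    messages=[
        {"role": "user", "content": "Summarize the 1989 Tiananmen Square protests."},
    ],
)
print(response.choices[0].message.content)
```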
Testing the Perplexity version of DeepSeek makes its differences from the original app clear. Asked about sensitive international issues such as Taiwan, the R1 model on Perplexity answered without obstruction, whereas the same queries put to DeepSeek's own app returned responses like, "Sorry, that's beyond my current scope. Let's talk about something else."
Notably, DeepSeek R1 shows its reasoning as it works through a prompt and displays the web sources it consulted along the way. That transparency makes its answers easier to trust, since outputs arrive alongside citations users can check.
Against its rivals, DeepSeek R1 on Perplexity stands out for pairing generative AI with web search, giving users another option for personalized, grounded answers. Perplexity also recently launched its Assistant feature on Android, which lets consumers swap their device's standard AI assistant for Perplexity's offering, DeepSeek R1 included.
With censorship in AI drawing growing attention, DeepSeek's controversies are a microcosm of the broader debate over how technology intersects with political narratives. Whether Perplexity's adaptation of DeepSeek R1 signals a shift toward more transparent AI remains to be seen, but the developments are worth watching.