Technology
27 December 2025

AI Deepfake Surge And TikTok Deal Ignite US Debate

As AI-generated videos flood social media and TikTok’s US future hangs on a controversial deal, experts warn that current safeguards are failing to keep up with technological and national security risks.

In recent months, the digital landscape has been rocked by a tidal wave of artificial-intelligence-generated videos flooding platforms like TikTok, X, YouTube, Facebook, and Instagram. These AI-crafted clips, many indistinguishable from genuine footage, have ushered in a new era of disinformation, raising alarms among experts, lawmakers, and everyday users alike. At the same time, TikTok itself has found its future in the United States hanging in the balance, with a high-profile deal aiming to stave off a ban but leaving many national security questions unresolved. The convergence of these two stories underscores the central challenge facing social media in 2025: how to balance innovation, user engagement, and the urgent need for trust and safety.

According to The New York Times, the recent surge in AI-generated videos began in earnest following the launch of OpenAI’s Sora app just two months ago. Sora, alongside Google’s rival tool Veo, can produce highly realistic videos from simple prompts. While some of these creations are harmless—think adorable fake animals or viral memes—others are far more insidious. In one widely circulated example, a fake interview generated by Sora convinced hundreds of viewers it was real. The reactions were swift and, in many cases, ugly: some viewers vilified the woman in the video, others launched racist attacks, and many used the clip as ammunition in political debates, particularly around government assistance programs and President Donald Trump’s proposed cuts to them.

Experts who track digital misinformation say this new wave of AI fakes is different from anything seen before. Sam Gregory, executive director of Witness—a human rights group focused on technology’s risks—put it bluntly: “Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that. Could they do better in proactively looking for AI-generated information and labeling it themselves? The answer is yes, as well.” Gregory’s comments reflect a growing consensus that the current safeguards are simply not keeping pace with the technological leaps made by tools like Sora and Veo.

Most major social media companies have policies requiring creators to disclose when content is AI-generated and prohibiting deceptive material. But as The New York Times reports, these rules have proven “woefully inadequate” for the new generation of AI content. Platforms largely rely on creators to voluntarily label their fake videos, but many simply don’t. Even when companies like YouTube and TikTok have the technical means to detect AI-generated videos—using metadata or watermarks—they don’t always flag them to viewers right away.

OpenAI and Google have tried to address concerns by embedding visible watermarks (“Sora” or “Veo”) on their videos and including invisible metadata to trace origins. However, as OpenAI itself acknowledged, “AI-generated videos are created and shared across many different tools, so addressing deceptive content requires an ecosystem-wide effort.” TikTok, in response to growing alarm at the realism of these fakes, has announced plans to tighten its disclosure rules and give users more control over how much synthetic content they see.
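To make the mechanism concrete, here is a minimal sketch of what metadata-based detection can look like. It is an assumption-laden illustration, not any platform’s actual pipeline: it shells out to the open-source exiftool utility (which must be installed) and scans the dumped metadata for provenance-style markers such as C2PA Content Credentials fields or the “Sora” and “Veo” brand names. The `provenance_hints` helper and its keyword list are hypothetical.

```python
import json
import subprocess

# Illustrative keyword list: C2PA "Content Credentials" field names plus the
# visible-watermark brands mentioned in this article. A real detector would
# verify the signed C2PA manifest, not search the metadata for strings.
PROVENANCE_HINTS = ("c2pa", "contentcredentials", "claimgenerator", "sora", "veo")

def provenance_hints(path: str) -> list[str]:
    """Return metadata entries in a video file that hint at AI provenance.

    Requires the exiftool CLI (https://exiftool.org) on the PATH.
    """
    raw = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(raw)[0]  # exiftool emits one JSON object per input file
    hits = []
    for key, value in tags.items():
        blob = f"{key}={value}".lower()
        if any(hint in blob for hint in PROVENANCE_HINTS):
            hits.append(f"{key}: {value}")
    return hits

if __name__ == "__main__":
    import sys
    for hit in provenance_hints(sys.argv[1]):
        print(hit)
```

Even where such markers exist, they are fragile: invisible metadata is routinely stripped when a clip is re-encoded or re-uploaded, which is part of why OpenAI describes deceptive content as an ecosystem-wide problem.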

Yet, the platforms’ incentives don’t always align with public interest. Alon Yamin, CEO of Copyleaks—a company specializing in AI detection—suggested that as long as users keep clicking, there’s little financial motivation for platforms to restrict the spread of viral AI videos. “In the long term, once 90% of the traffic for the content in your platform becomes AI, it begs some questions about the quality of the platform and the content,” Yamin noted. “So maybe longer term, there might be more financial incentives to actually moderate AI content. But in the short term, it’s not a major priority.”

Against this backdrop, TikTok itself has been at the center of a political and national security storm. Last week, TikTok’s CEO Shou Zi Chew announced that the company had signed binding agreements to spin off its US operations into a new joint venture with American investors, with the deal expected to close on January 22, 2026. President Trump quickly endorsed the arrangement, declaring that he was “saving” TikTok while protecting national security.

But not everyone is convinced. Former Treasury and Justice Department officials who worked on TikTok policy during the Biden administration have voiced skepticism, according to The New York Times. They argue that the deal fails to resolve core national security risks—namely, China’s potential access to the data of roughly 170 million American users and its ability to manipulate TikTok’s powerful content recommendation algorithm. The new structure would reportedly allow TikTok’s Chinese parent company, ByteDance, to license or transfer its recommendation algorithm to the US entity and continue managing “global product interoperability.” In other words, the American app would remain deeply integrated with TikTok’s worldwide platform.

Oracle, TikTok’s US cloud provider, would serve as a “trusted security partner,” monitoring the system and retraining the algorithm. Despite these measures, experts warn that Beijing would still retain leverage over the newly structured US TikTok entity. The arrangement closely resembles an earlier proposal, Project Texas, which the US government previously rejected as inadequate. Under that plan, officials warned there would be “no way to ascertain in real time” whether China was accessing or manipulating TikTok’s data or algorithm, even with enhanced controls and third-party oversight. The Justice Department concluded that simply monitoring ByteDance would not be enough, stating that proper enforcement would require “resources far beyond what the US government and Oracle possess.”

Congress, reflecting these concerns, passed bipartisan legislation last year demanding a clean break between TikTok’s US operations and its Chinese parent. ByteDance itself has admitted that fully severing the US platform from its globally integrated app was “not feasible” on the law’s timetable, citing the vast amount of code maintained by thousands of engineers worldwide, including many in China.

So why is ByteDance still involved in the new deal? The answer, according to experts cited by The New York Times, is a mix of hard constraints and powerful incentives. China placed TikTok’s algorithm on its export control list in 2020, giving Beijing veto power over any meaningful technology transfer. Meanwhile, in the US, financial and political pressures to keep TikTok alive are immense—American investors stand to lose billions if a ban goes through, Oracle would expand its business, and tech giants like Apple and Google want to avoid legal headaches.

For now, the White House has said it approved the TikTok deal after an interagency process involving defense, law enforcement, and intelligence agencies. However, officials and executives are being urged to provide more transparency and testify publicly about their decision-making. As one expert put it, “If the deal truly adheres to the law and protects national security, more transparency would strengthen its credibility. If it cannot withstand scrutiny, that answer would be just as important.”

The twin challenges of AI-generated disinformation and unresolved national security concerns over TikTok highlight the complex, high-stakes battle for trust in the digital age. As technology races ahead, the world is left grappling with questions of authenticity, safety, and sovereignty—questions that demand urgent answers from both tech companies and policymakers.