On August 21, 2025, a cascade of warnings and accusations swept through the highest levels of American technology and civil rights circles, igniting a fierce debate over the future of digital privacy, free speech, and government surveillance. Federal Trade Commission (FTC) Chair Andrew Ferguson fired off letters to thirteen of the nation’s largest tech companies, urging them not to bow to foreign laws that could erode Americans’ fundamental rights. Almost simultaneously, Amnesty International leveled serious charges against U.S. authorities, accusing them of using advanced artificial intelligence tools to surveil pro-Palestinian protesters and target non-citizens, especially immigrants and international students.
According to The Washington Post, Ferguson’s letters landed on the desks of executives at Akamai, Alphabet (Google’s parent company), Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack, and X (formerly Twitter). The timing was no accident. Ferguson’s message was pointed: U.S. tech giants, he warned, face mounting pressure from laws like the European Union’s Digital Services Act and the United Kingdom’s Online Safety Act—statutes that, in his view, incentivize “censorship of speech, including speech outside of Europe.”
He expressed concern that these foreign regulations could lead to increased surveillance of Americans by foreign governments and expose users to greater risks of identity theft and fraud. “Companies might be censoring Americans in response to the laws, demands, or expected demands of foreign powers,” Ferguson wrote, as reported by The Wall Street Journal. “And the anti-encryption policies of foreign governments might be causing companies to weaken data security measures and other technological means for Americans to vindicate their right to anonymous and private speech.”
Ferguson’s anxieties didn’t stop at foreign laws. He criticized the Biden administration for what he described as “actively” working to censor American speech online, while praising former President Donald Trump for allegedly putting “a swift end” to the weaponization of the federal government against Americans for their speech. The Supreme Court, for its part, has largely upheld the constitutionality of federal government communications with tech companies on content moderation under the Biden administration, according to Reuters.
The FTC’s focus, Ferguson stressed, is on ensuring that companies maintain strong end-to-end encryption for users, regardless of what foreign governments might demand. “If a company promises consumers that it encrypts or otherwise keeps secure online communications but adopts weaker security due to the actions of a foreign government, such conduct may deceive consumers who rightfully expect effective security, not the increased susceptibility to breach or intercept desired by a foreign power,” Ferguson wrote.
These warnings came during a week of fast-moving developments. Director of National Intelligence Tulsi Gabbard announced that U.S. officials had successfully persuaded U.K. leaders to abandon their demand that Apple provide law enforcement with access to encrypted user cloud data—even for users outside the U.K. The dispute had already prompted Apple to withdraw its Advanced Data Protection feature from U.K. iPhones and computers, a move privacy advocates said was necessary to protect the integrity of encryption for all users. As The New York Times noted, privacy advocates have long argued that any backdoor access for law enforcement fundamentally undermines the security that millions of users rely on, no matter where they live.
But while Ferguson’s focus was on foreign threats and the balance between privacy and regulatory compliance, another storm was brewing at home. Amnesty International, in a report highlighted by France 24, accused U.S. authorities of using AI-powered surveillance tools from Palantir and Babel Street to monitor immigrants and target non-citizens participating in pro-Palestinian protests. The rights group’s review of public records—including documents from the Department of Homeland Security—revealed that these software platforms enable mass surveillance and assessment of individuals, with a particular focus on foreign nationals.
“The US government is deploying invasive AI-powered technologies within a context of a mass deportation agenda and crackdown on pro-Palestine expression, leading to a host of human rights violations,” said Erika Guevara-Rosas of Amnesty International. She added that this surveillance has resulted in “a pattern of unlawful detentions and mass deportations, creating a climate of fear and exacerbating the ‘chilling effect’ for migrant communities and for international students across schools and campuses.”
Amnesty’s research found that the U.S. State Department’s so-called “Catch and Revoke” initiative involves social media monitoring, visa status tracking, and automated threat assessments of visa holders such as foreign students. According to Amnesty, “systems like Babel X and Immigration OS (from Palantir) play a key role in the US administration’s ability to carry out its repressive tactics.” The organization urged Palantir and Babel Street to immediately cease their work with the U.S. administration on immigration enforcement unless they can prove their technology is not contributing to serious human rights abuses.
The political stakes are high. Since beginning his second term in January 2025, President Trump has made campus protests—particularly those involving foreign students and scholars—a flashpoint in American politics. The administration, as reported by AFP, has characterized widespread campus protests and sit-ins calling for an end to Israel’s war in Gaza as “antisemitic,” and moved to expel foreign students and professors who took part. Amnesty International warns that the use of AI surveillance tools risks fueling Trump’s ability to “deport marginalized people on a whim,” particularly as his administration targets universities over alleged political bias and antisemitic policies—claims that, so far, remain unsubstantiated.
These developments have left tech companies, civil liberties advocates, and ordinary Americans grappling with a complex web of legal, ethical, and political challenges. On one hand, U.S. tech giants are being pressed to comply with a patchwork of international laws, some of which may conflict with the values and rights enshrined in the U.S. Constitution. On the other, the domestic deployment of AI surveillance tools is raising urgent questions about due process, discrimination, and the future of protest and dissent in a digital age.
Ferguson’s letters underscore the risk that, faced with a fragmented global regulatory environment, companies might choose to adopt the most restrictive or invasive policies across the board, rather than tailoring their compliance to each jurisdiction. “I am also concerned that companies such as your own might attempt to simplify compliance with the laws, demands, or expected demands of foreign governments by censoring Americans or subjecting them to increased foreign surveillance even when the foreign government’s requests do not technically require that,” he cautioned.
Meanwhile, rights groups like Amnesty International are sounding the alarm that the embrace of AI-driven surveillance—especially in the context of immigration enforcement and protest monitoring—could have devastating consequences for the most vulnerable. The “chilling effect” described by Guevara-Rosas is not just theoretical; it is already being felt by migrant communities and international students, who now must weigh the risks of participating in public demonstrations or even expressing their views online.
As the U.S. government, tech companies, and advocacy organizations continue to clash over the boundaries of privacy, speech, and security, one thing is clear: the digital rights landscape is being redrawn in real time, with profound implications for millions of people both at home and abroad.