Politics
19 October 2025

Deepfake Video Falsely Claims MP George Freeman Defected

A widely circulated AI-generated video falsely showed Conservative MP George Freeman joining Reform UK, prompting urgent warnings about political disinformation and a police investigation.

On October 18, 2025, a deepfake video featuring Conservative MP George Freeman, purportedly announcing his defection to Reform UK, began circulating widely across social media platforms. The video, which appeared to show Freeman stating, "the time for half measures is over" and that the "Conservative party had lost its way," immediately caused a stir within British political circles and beyond. But as quickly as the video spread, Freeman himself denounced it as an AI-generated fabrication, sparking a broader conversation about the dangers of deepfakes and misinformation in modern politics.

Freeman, who represents Mid Norfolk and has served in various ministerial roles—most recently as Minister of State in the Department for Science, Innovation and Technology—moved swiftly to clarify his position. In a statement posted to his social media accounts, he declared, "I remain the Conservative MP for Mid Norfolk and have no intention of joining Reform or any other party." The video, he said, "is a fabrication, created without my knowledge or consent, and uses my image and voice without permission. Regardless of my position as an MP, that should be an offence."

According to Metro, Freeman reported the incident to the police and urged the public to do the same if they encountered the video. "I have reported this matter to the relevant authorities, and I urge anyone who sees the video to report it immediately rather than share it further. Robust action must be taken by all to tackle the growing issue of 'fake news' and this includes the social media platforms," he said. Norfolk Police confirmed they had been contacted, though they declined to provide further details at the time.

The deepfake video, which was widely shared by accounts such as "Brexit Brian," showcased the increasing sophistication of AI-generated content. The technology, powered by tools like OpenAI’s Sora and Google’s Veo, has made it easier than ever for individuals to create convincing but entirely false videos. Although these companies have implemented controls to prevent misuse, experts told Metro that the risk of misinformation and disinformation remains high.

Freeman’s situation is not an isolated incident. As he pointed out, there has been a "huge increase in political disinformation, disruption and extremism" in recent months. In his words, "the intelligence services are clear that Russia and other rogue states are engaging in cyber disruption on an industrial scale, alongside organised cyber crime." He emphasized that the deliberate spread of disinformation through AI-generated content—whether for political indoctrination, fraud, or other purposes—is "a concerning and dangerous development."

"As a Member of Parliament, this sort of political disinformation has the potential to seriously distort, disrupt and corrupt our democracy," Freeman warned, echoing concerns raised by security experts and government agencies. He added, "It is profoundly disappointing to witness. This kind of behaviour undermines public trust, damages democracy, and represents a direct attack on the integrity of our democratic process."

Freeman was careful to note that he did not know whether the deepfake video was the work of political opponents or simply a "dangerous prank." Regardless, he insisted, "it is clear that in recent months there has been a huge increase in political disinformation, disruption and extremism—on both the left and the right, by religious extremists, by dangerous influencers like Andrew Tate, and anti-democratic disrupters." He called on everyone who encounters such content to report it rather than amplify it: "I urge anyone who sees the video to report it immediately rather than share it further."

The Local Democracy Reporting Service and BBC both highlighted the broader implications of the incident. Freeman’s experience illustrates the very real threat that deepfakes and AI-powered misinformation pose to democratic institutions and public trust. As digital tools for creating synthetic media become more accessible and sophisticated, the challenge of distinguishing fact from fiction grows more acute.

Security experts have long warned about the potential for deepfakes to erode trust in public figures and institutions. As Metro noted, tools such as OpenAI’s Sora and Google’s Veo include controls intended to make it difficult to impersonate real people, but the technology’s rapid evolution means that bad actors can still slip through the cracks, creating "a real risk" of misinformation and disinformation, experts told the publication.

Freeman’s case has prompted renewed calls for legislative and technological solutions to address the challenge of AI-generated disinformation. "Regardless of my position as an MP, that should be an offence," Freeman insisted, suggesting that the law should be updated to criminalize the unauthorized use of a person’s likeness and voice in deepfakes. While current UK law does address certain types of online harassment and identity theft, the legal framework has yet to catch up fully with the new realities of AI-driven media manipulation.

Freeman’s history as a parliamentarian and minister lends weight to his warnings. Having served as an MP since 2010 and holding senior roles in government, he is no stranger to the pressures of public life. But as he noted, the scale and sophistication of recent disinformation campaigns represent something new—and deeply troubling. "The deliberate spread of disinformation through AI-generated content—whether aimed at stealing identity for fraud, mis-selling, political indoctrination or any other purpose—is a concerning and dangerous development," he said, as reported by ITV News.

The incident has also placed renewed scrutiny on social media platforms and their role in amplifying or controlling the spread of fake content. Freeman called on these companies to take "robust action" against fake news, arguing that the fight against disinformation requires cooperation from all stakeholders, not just law enforcement or politicians.

As of this writing, Norfolk Police have confirmed they were contacted, and the investigation remains ongoing. In the meantime, Freeman’s message to the public is clear: vigilance and responsible action are essential—report the video rather than share it further.

Freeman’s ordeal serves as a stark reminder of the challenges democracies face in the digital age. The rise of deepfake technology—while offering creative and commercial potential—carries significant risks when wielded irresponsibly or maliciously. As lawmakers, technology companies, and citizens grapple with these new realities, the integrity of public discourse and the health of democracy itself may well depend on their collective response.