Politics
18 October 2025

Deepfake Video Targets Tory MP George Freeman Online

A fabricated AI-generated video falsely showing Conservative MP George Freeman defecting to Reform UK sparks urgent warnings about digital disinformation and the vulnerability of democracy.

On October 18, 2025, Conservative MP George Freeman found himself at the center of a digital storm after a deepfake video circulated widely on social media, falsely depicting him announcing his defection to Reform UK. The video, which used Freeman’s likeness and voice, showed the Mid Norfolk MP seemingly declaring that “the time for half measures is over” and that the “Conservative party had lost its way.” The footage, slickly produced and disturbingly convincing, quickly caught the attention of both political circles and the wider public, raising alarm about the growing threat of AI-generated disinformation in British politics.

Freeman, who has represented Mid Norfolk since 2010 and most recently served as Minister of State in the Department for Science, Innovation and Technology, moved swiftly to condemn the video. He categorically denied any intention of leaving the Conservative Party, stating, “I remain the Conservative MP for Mid Norfolk and have no intention of joining Reform or any other party,” according to the Local Democracy Reporting Service (LDRS). The MP made clear that the video was a total fabrication: a chilling demonstration of the power and reach of deepfake technology in the wrong hands.

“The video is a fabrication, created without my knowledge or consent, and uses my image and voice without permission,” Freeman said, as reported by the BBC. He continued, “Regardless of my position as an MP, that should be an offence.” His comments reflect a growing consensus among lawmakers and experts that existing laws are struggling to keep pace with the rapid evolution of artificial intelligence and its potential to undermine democratic processes.

Freeman did not mince words about the dangers such disinformation poses. “This sort of political disinformation has the potential to seriously distort, disrupt and corrupt our democracy,” he warned. The MP reported the video to the authorities, including Norfolk Police, and called on anyone who encounters the clip to report it immediately rather than share it further. “I have reported this matter to the relevant authorities, and I urge anyone who sees the video to report it immediately rather than share it further,” Freeman urged, highlighting the vital role the public plays in stemming the spread of false information.

The video’s appearance could not have come at a more fraught time for British politics. In recent months, Freeman observed, there has been “a huge increase in political disinformation, disruption and extremism—on both the left and the right, by religious extremists, by dangerous influencers like Andrew Tate, and anti-democratic disrupters.” His remarks point to a broader trend: as technology becomes more sophisticated, so too do the tactics of those seeking to destabilize democratic institutions. Whether the deepfake was the work of political opponents, a “dangerous prank,” or something else entirely remains unclear, but the impact is the same—a public left questioning what is real and what is not.

For many observers, Freeman’s experience is a stark reminder that no one is immune from the reach of digital manipulation. The MP’s prominence—having served in various ministerial roles and as a key figure in the Department for Science, Innovation and Technology—only made the deepfake more plausible to unsuspecting viewers. The video’s spread across social media platforms like Facebook and X (formerly Twitter) underscores how quickly falsehoods can gain traction before they are debunked.

The incident has reignited debate over the adequacy of current laws to address the threat posed by deepfakes and AI-generated content. Freeman’s call for the creation and dissemination of such videos to be considered an offence “regardless of my position as an MP” resonates with many who fear that the legal system is lagging behind technological developments. In the UK, while laws exist to tackle certain forms of online abuse and fraud, the specific challenge posed by deepfakes—especially those targeting public figures—remains a grey area.

Political disinformation is nothing new, of course. But the ability of artificial intelligence to replicate a person’s appearance and voice with uncanny accuracy has taken the problem to a whole new level. Deepfakes can be used to create convincing videos of politicians making statements they never uttered, eroding trust in both individuals and institutions. As Freeman noted, the recent surge in such content is not limited to one side of the political spectrum; the threat comes from a variety of sources, whether politically motivated actors, religious extremists, or so-called “dangerous influencers.”

The response from authorities and social media platforms has so far been measured. Norfolk Police confirmed they had received Freeman’s report, though no official comment has been made about any investigation. Facebook, one of the platforms where the video gained traction, has also been approached for comment but has yet to announce any concrete measures in response to this specific incident. The lack of immediate action has fueled calls for stronger safeguards and faster responses to deepfake content, especially when it targets public figures or has the potential to sway public opinion.

Freeman’s ordeal has also prompted reflection within the political community about the personal and professional risks posed by deepfakes. For MPs and other public officials, the prospect of being targeted by fabricated videos is no longer a distant threat but a present reality. The consequences can be severe, ranging from reputational damage to confusion among constituents and even potential security risks. As Freeman himself put it, “I do not know whether this incident was a politically motivated attack by political opponents or just a dangerous prank,” though either way, he said, the recent surge in disinformation, disruption and extremism is unmistakable.

At the heart of the issue is a simple but pressing question: how can democracies protect themselves from the corrosive effects of AI-driven disinformation? Freeman’s call to action—to report, not share, suspicious content—offers a starting point, but many argue that more robust legal and technological solutions are needed. The stakes could hardly be higher. As the tools for creating deepfakes become more accessible and their outputs more convincing, the risk of widespread public confusion, electoral manipulation, and loss of trust in democratic institutions grows ever more acute.

For now, Freeman remains steadfast in his commitment to his constituents and his party, determined not to let a digitally manufactured lie derail his career or his message. The incident has, however, sent a clear warning to politicians and the public alike: in the age of AI, seeing is no longer believing. Staying vigilant, skeptical, and proactive may be the best defense against a future where the line between reality and fiction grows ever more blurred.