World News
24 October 2025

Irish Election Rocked By Deepfake Video Scandal

A viral AI-generated video falsely showing candidate Catherine Connolly conceding highlights gaps in Meta’s response and EU digital oversight as Ireland heads to the polls.

Just days before Ireland’s presidential election, a slickly produced deepfake video sent shockwaves through the country’s political landscape. The viral clip, which appeared to show independent progressive candidate Catherine Connolly conceding defeat and withdrawing from the race, spread rapidly across social media platforms—most notably Meta’s Facebook—throwing the integrity of the electoral process into question and highlighting the growing threat posed by AI-generated disinformation to democratic institutions.

According to reporting by Futurism and Euractiv, the video was first posted by a lookalike Facebook account named “RTÉ News AI,” a clear attempt to mimic Ireland’s respected public broadcaster. The 40-second clip was convincingly edited, opening with what looked like a real news desk anchor solemnly announcing, “Catherine Connolly has confirmed her withdrawal from the presidential race.” It then cut to synthesized footage of Connolly herself, complete with RTÉ-styled chyrons and even crowd noises—though, as some eagle-eyed viewers noted, the voices shouting “Catherine, no!” sounded distinctly un-Irish. The video concluded with another fake reporter stating, “simply put, Friday’s election is now cancelled. It will no longer take place as previously planned. But as for Heather Humphreys, she will become the winner automatically and will be appointed tomorrow.”

For nearly half a day, the video was allowed to circulate on Facebook, amassing 30,000 views and being shared hundreds of times before it was finally removed. By then, the damage had already been done: confusion and concern were rife, especially given Connolly’s commanding lead in the polls over her center-right rival, Heather Humphreys. The timing could hardly have been more critical, with the election scheduled for Friday, October 24, 2025.

Upon discovering the deepfake, Connolly immediately filed a complaint with Ireland’s electoral commission. The incident prompted swift action from Coimisiún na Meán, Ireland’s media regulator, which contacted Meta to demand details about the measures taken in response and to remind the company of its obligations under the EU Digital Services Act (DSA). As a spokesperson for the regulator told Futurism, “We have contacted the platform concerned to understand the immediate measures they have taken in response to this incident, and have reminded the platform of their obligations under the EU Digital Services Act relating to protecting the integrity of elections.”

The DSA, which came into force in the European Union to address the risks posed by large online platforms, requires companies like Meta to actively mitigate threats to democratic processes—including the spread of disinformation and, specifically, AI-generated deepfakes. Yet, as Euractiv noted, the enforcement landscape remains patchy. While Coimisiún na Meán has oversight powers over Facebook due to Meta’s EU headquarters being in Dublin, Ireland has not yet fully empowered its AI watchdog to enforce the EU’s broader AI Act. Meanwhile, the European Commission itself has had open proceedings against Meta since April 2024 for suspected DSA breaches—including failures to curb disinformation and coordinated inauthentic behavior—but none of these probes have been closed.

The Commission told Euractiv it was “aware” of the Connolly deepfake and “in touch” with Ireland’s Coimisiún na Meán. “The election guidelines under the DSA provide recommended measures to very large online platforms and search engines to mitigate systemic risks online that may impact the integrity of elections,” a spokesperson said, adding that these include “specific potential mitigation measures linked to generative AI, including deepfakes.”

In the run-up to the election, Coimisiún na Meán had already convened a roundtable with major online platforms—including Meta and YouTube—to discuss their preparedness for handling election-related disinformation. The regulator also produced an election handbook for candidates and journalists to help spot and flag illegal online content. Despite these efforts, the viral deepfake still managed to slip through the cracks, raising uncomfortable questions about the adequacy of current safeguards.

After the video’s removal, Meta released a statement saying it had taken down some content related to Connolly because it violated its policies on voter interference. However, the company could not confirm whether the specific viral deepfake had been removed immediately. According to Adrian Weckler, a technology journalist for the Irish Independent, Meta only acted after his publication contacted the company for comment. “We have a video that disrupted the presidential election, but no one seems to be responsible for it,” Weckler told The Tonight Show, underscoring the confusion and lack of accountability that often plague such incidents.

Meta also said it had deployed a dedicated team in Ireland to respond quickly to any disinformation threats ahead of the election. Meanwhile, YouTube—where the deepfake was also circulating—terminated a channel called “RTÉ News AI” for violating its Community Guidelines on impersonation. A spokesperson told Euractiv, “We terminated the relevant channel for violating our Community Guidelines, which strictly prohibited channel impersonation.”

This isn’t the first time Ireland has found itself the target of AI-driven hoaxes. Last year, hundreds of Dubliners showed up for a Halloween parade that never existed—an event fabricated by a website based in Pakistan using ChatGPT and manipulative SEO tactics to trick both search engines and the public. While that incident was more farcical than harmful, the Connolly deepfake underscores the far more serious threat posed by AI-generated disinformation when it comes to democratic processes.

Meta’s track record in this arena is, to put it mildly, checkered. The company was at the center of the infamous Cambridge Analytica scandal, in which data harvested from 50 million Facebook profiles was used to target and manipulate American voters during the 2016 election. More disturbingly, Facebook has also been implicated in the spread of viral misinformation that contributed to the genocide of Rohingya Muslims in Myanmar. As Futurism points out, “The Irish presidential election might be small potatoes in comparison, but it’s a glaring signal that Facebook and its parent company Meta are still incredibly vulnerable to this kind of malicious interference.”

Regulators, for their part, are scrambling to catch up. The DSA and the forthcoming AI Act represent significant steps forward, but enforcement gaps and jurisdictional complexity continue to hamper swift, decisive action. The split oversight between the European Commission and Ireland’s Coimisiún na Meán—driven by Meta’s EU headquarters being in Dublin—only adds to the confusion, as evidenced by the slow response to the Connolly deepfake.

For now, the episode serves as a stark warning not just to Ireland, but to democracies everywhere: as AI tools become ever more sophisticated and accessible, the risk of election interference through deepfakes and other forms of digital deception is only set to rise. The challenge for regulators, platforms, and civil society is to keep pace—before the next viral hoax tips the balance in a real election.