As Hurricane Melissa roared through the Caribbean in late October 2025, a parallel storm was brewing online—one fueled not by wind and rain, but by the rapid spread of phony, AI-generated videos. While the actual hurricane battered Jamaica with Category 5 winds and left a trail of destruction, millions of social media users worldwide found their feeds inundated with dramatic, but entirely fabricated, disaster scenes. The result? Confusion, concern, and a stark warning about the evolving dangers of misinformation in the digital age.
According to CBS News, Hurricane Melissa was “one of the strongest hurricanes ever recorded in the Atlantic,” making landfall in Jamaica as a Category 5 storm on October 28, 2025, and causing at least seven deaths across the northern Caribbean. CNN reported that the storm brought heavy rains to both Jamaica and Cuba, striking Cuba with 115 mph winds, and was the most powerful storm to hit the region since Hurricane Dorian in 2019. But as local authorities scrambled to keep residents informed and safe, a flood of AI-generated disaster content began to dominate social media platforms like X (formerly Twitter), TikTok, Instagram, and Facebook.
Some of these viral videos were truly eye-catching. One clip, apparently filmed from above, seemed to show the eye of Hurricane Melissa as seen from a plane’s porthole window—a swirling, doughnut-shaped cloud formation that looked both mesmerizing and menacing. Another video depicted four sharks swimming in a Jamaican hotel pool, supposedly swept in by the storm’s floodwaters. Yet another showed Kingston’s airport in ruins, its runways and terminals ravaged by Melissa’s fury. But none of these events actually occurred. As Full Fact and the Associated Press confirmed, these videos were the work of advanced AI video generators, not eyewitnesses on the ground.
“I am in so many WhatsApp groups and I see all of these videos coming. Many of them are fake,” warned Jamaica’s Information Minister, Senator Dana Morris Dixon, on October 27, as quoted by Agence France-Presse (AFP). “And so we urge you to please listen to the official channels.” Her plea echoed across the island as genuine news footage mingled with synthetic fakes, making it increasingly difficult for the public to distinguish fact from fiction.
The culprit behind much of this digital deception? OpenAI’s Sora 2, a text-to-video tool released just weeks before the hurricane struck. As NewsGuard’s Sofia Rubinson explained, “Now, with the rise of easily accessible and powerful tools like Sora, it has become even easier for bad actors to create and distribute highly convincing synthetic videos.” In the past, viewers might have spotted telltale signs—strange shapes, garbled text, or unnatural motion—that gave away a video’s artificial origins. But as the technology improves, these flaws are vanishing, making deepfakes harder to spot than ever.
Indeed, some creators didn’t even try to hide their work. The earliest version of the viral hurricane-eye video appeared on TikTok on October 26, 2025, and carried a caption admitting, “This is not real, it is a simulation made with AI for a ‘what if’ scenario.” The account’s bio described itself as “AI disaster curiosity,” and its page was filled with similar computer-generated storm scenes. Meanwhile, another TikTok user, Yulian_Studios from the Dominican Republic, posted AI-generated hurricane clips and described themselves as a “Content creator with AI visual effects.” The now-infamous shark-in-the-pool video, which garnered millions of views, was traced back to this account, though it had been removed by the time journalists investigated.
Why do people create these deepfakes in the midst of crisis? AI expert Henry Ajder told the Associated Press that most of the hurricane deepfakes he’s seen aren’t driven by politics, but by the pursuit of clicks and engagement. “It’s much closer to more traditional kind of click-based content, which is to try and get engagement, to try and get clicks,” Ajder explained. On platforms like X and YouTube, viral videos can translate directly into ad revenue or increased followers, providing a strong financial incentive to churn out sensational content—regardless of its truthfulness.
The consequences, however, are far from trivial. As Amy McGovern, a meteorology professor at the University of Oklahoma, told AFP, “This storm is a huge storm that will likely cause catastrophic damage, and fake content undermines the seriousness of the message from the government to be prepared.” Experts and officials alike worry that AI-generated misinformation can overshadow critical safety warnings, leading people to ignore real threats or act on false information. The proliferation of deepfake videos during Hurricane Melissa prompted platforms like TikTok to remove over two dozen clips and multiple accounts after being flagged by AFP. TikTok’s guidelines now require that AI-generated or heavily edited content depicting realistic people or events be labeled, and the platform prohibits misleading material on matters of public importance.
Yet the arms race between misinformation and moderation continues. Despite Meta’s policies requiring labels for AI-generated videos, similar content appeared on Facebook and Instagram throughout the week. And as Aaron Rodericks, head of trust and safety at Bluesky, told NPR, the public is not fully prepared for a world where fabricated video evidence can be created and distributed at the tap of a button. “In a polarized world, it is easy to create fabricated evidence targeting identity groups or individuals, or to conduct large-scale scams. What once existed as a rumor—like a fabricated story about an immigrant or politician—can now be turned into seemingly credible video proof,” Rodericks cautioned.
So, how can viewers protect themselves from falling for these digital illusions? Journalists and fact-checkers recommend a few key steps. First, check for watermarks or logos indicating a video was produced by tools like Sora; these can be cropped out, but telltale blurring where a watermark once sat is itself a clue. Scrutinize details for oddities—distorted objects, garbled lettering, or unnatural movements. Trust your instincts: if a video seems exaggerated or implausible, it may be a deepfake. And above all, rely on official sources for disaster updates. The Jamaican government and the National Hurricane Center provided regular, verified information throughout the crisis, while platforms like TikTok offered event guides to steer users toward trustworthy content.
The Hurricane Melissa misinformation surge is a wake-up call. With AI video generation tools becoming ever more accessible and sophisticated, the line between reality and fabrication is blurring at an unprecedented pace. As the world faces more natural disasters and global events, the challenge of separating fact from fiction online will only grow. For now, vigilance, skepticism, and a reliance on credible sources remain the best defense against the rising tide of digital deception.