Technology
24 March 2025

Technology Companies Tackle Deepfake Threats As Scams Rise

Innovations in AI aim to combat the growing challenge of manipulated content and maintain cybersecurity standards.

As technology advances, so do the challenges it brings, particularly in the domain of cybersecurity. Companies are actively working to find solutions to combat deepfakes, manipulated videos, and fabricated content, all of which are proliferated by artificial intelligence (AI). This issue has become increasingly relevant as scams perpetrated through this technology continue to rise.

One alarming case involved a woman named L'Oreal, who was visiting her elderly grandmother, Ruthie, when a scam call arrived at the house. Using voice manipulation, the caller impersonated L'Oreal's mother, Debbie Dudkin, claiming she had been in a car accident and was in the hospital. L'Oreal quickly ended the call and phoned her mother, who was in fact safe in her office at the time. "These kinds of attempts happen daily. The risk of artificial intelligence could be very high," Debbie, a California resident, said in an interview with Agence France-Presse (AFP).

Deepfake technology is not only an issue on social media, where fabricated clips of public figures circulate; it is also increasingly exploited by organized crime. In February 2024, police in Hong Kong revealed that an employee of a multinational company had been scammed out of HK$200 million (about US$25 million) after being deceived in a video conference populated by AI-generated avatars resembling several of the employee's colleagues.

A study published in February 2025 by AI firm iProov highlighted the alarming inability of the public to detect these manipulations: only 0.1% of Americans and Britons tested could accurately identify a deepfake image or video clip. Vijay Balasubramaniyan, head of Pindrop Security, a company specializing in audio verification, noted, "Less than ten years ago, there was only one tool for generating voices with artificial intelligence; today, there are 490." The shift to generative AI has also drastically reduced the amount of audio needed to clone a voice: from around 20 hours of recordings to merely five seconds.

Organizations are scrambling to adapt, and many companies now provide tools for detecting fake audio and video content in real time. Companies like Reality Defender and Intel are at the forefront of this effort. Intel's "FakeCatcher" technology detects subtle changes in facial blood flow visible in video pixels, while Pindrop analyzes every second of recorded audio, comparing it against human voice patterns.

Nicos Vekiarides, head of Attestiv, warned, "Like any cybersecurity company, we always need to stay ahead of the curve." He added that deepfake incidents are multiplying as the underlying technology advances. Some reports have questioned the effectiveness of existing detection systems, yet numerous academic studies indicate that accuracy rates in spotting fakes are improving.

The risk of fraud remains particularly high in sectors like finance and insurance, traditionally the leading targets of such attempts. Vekiarides emphasized that the issue has escalated into a global cybersecurity threat, warning, "Any company can face reputational damage due to deepfakes or potentially become a target for sophisticated attacks."

As remote work continues to rise, the chances of identity theft through deepfake technology also grow, impacting everyday people, especially vulnerable populations like the elderly, who may be at risk from fabricated calls.

In January 2025, the Chinese manufacturer Honor presented its new Magic 7 smartphone, capable of recognizing when AI is being employed during video calls and issuing real-time alerts. Additionally, the UK-based startup Surf Security is set to launch a web browser designed for companies that will warn users when it detects AI-generated audio or video content.

In the academic realm, Siwei Lyu, a computer science professor at the University at Buffalo in New York, predicted that deepfakes will become ubiquitous, much like the spam emails that plagued early internet users but are now largely controlled by filtering software. "AI-generated content has blurred the line between human and machine," Balasubramaniyan concluded, predicting that companies able to re-establish the distinction between the two will become immensely valuable in a market projected to be worth billions.