Apple is reportedly preparing to unveil a live translation feature for its AirPods, according to a recent report from Bloomberg. The new capability aims to let users hold conversations in different languages seamlessly.
The anticipated launch is part of the upcoming iOS 19, expected later in 2025. Apple's entry follows its competitors, who introduced similar features years earlier: Google made headlines in 2017 by building translation into its Pixel Buds, while Xiaomi offers instant translation in its Buds 5 Pro, as does Samsung in its Buds 3 Pro.
Yet the dream of a universal translator, a concept long coveted in both technology and science fiction, remains elusive. Recent advances in translation technology have brought it closer to reality, but significant challenges remain. Spoken language in particular is intricate and difficult for artificial intelligence to capture adequately.
Looking back at the science fiction realm, Douglas Adams brought to life the idea of a universal translator through the Babel Fish in his iconic work, The Hitchhiker’s Guide to the Galaxy, while Star Trek fans fondly recall Captain Kirk utilizing a handheld device before the advanced communicator in The Next Generation gave crew members instant translation capabilities.
Notably, the path to bringing universal translation technologies to consumer electronics has seen considerable progress over the past decade. The breakthrough of Large Language Models (LLMs) in 2022 marked a pivotal moment in this journey. Still, the technology remains a work in progress for everyday use. A recent TechRadar test of Timekettle's translation earpieces at the IFA trade fair in September 2024 noted its limitations: "I would like the translations to come faster for a more natural flow."
Similarly, Wired's review of the Vasco Translator E1 highlighted the necessity for users to speak slowly and clearly: “As with all live translation tools, you need to speak reasonably slowly, take a pause every one or two sentences, and pronounce words as clearly as possible.” These reviews illustrate the challenges facing real-time translation technology today.
One major hurdle still to overcome is latency—the delay between speaking and translation. Current evaluations indicate a time gap of approximately 2.5 seconds for earpieces to interpret speech. Many products still require the aid of a smartphone, complicating the process further. Moreover, achieving high accuracy in translation involves navigating accents, dialects, street language, and multiple speakers—all of which pose substantial barriers to effective communication.
Nonetheless, some companies are tackling these challenges head-on. Timekettle, founded in 2016 with the ambition of building a real-world Babel Fish, is making strides in the field. Its recent software initiative, Babel OS, aims to improve speed and accuracy by segmenting spoken sentences: phrases are divided into manageable parts, allowing different algorithms to analyze each part more deeply.
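Timekettle has not published the internals of Babel OS, but the general idea of segmenting speech for lower-latency translation can be illustrated with a small sketch. The boundary rules and the `max_words` limit below are hypothetical, chosen only to show how a transcript stream might be cut into chunks that can each be handed to a translator while later speech is still arriving.

```python
import re

# Illustrative only: split at pause-like punctuation, so each clause can
# be translated as soon as it is complete rather than waiting for the
# full sentence. These rules are an assumption, not Timekettle's logic.
BOUNDARIES = re.compile(r"(?<=[.,;?!])\s+")

def segment(transcript: str, max_words: int = 8) -> list[str]:
    """Split a transcript into chunks of at most max_words words."""
    chunks = []
    for clause in BOUNDARIES.split(transcript):
        words = clause.split()
        # Break up long clauses so no single chunk stalls the pipeline.
        for i in range(0, len(words), max_words):
            chunks.append(" ".join(words[i:i + max_words]))
    return chunks

print(segment("If you segment speech at natural pauses, each piece "
              "can be translated immediately, which lowers latency."))
```

The trade-off this sketch makes visible is the one reviewers describe: smaller segments reduce the wait before a translation starts, but give the translation model less context to work with, which is why accuracy and speed pull against each other.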
Additionally, Timekettle is experimenting with voice cloning technology that could mimic the speaker’s voice in translations, enhancing the user experience and authenticity of communication. Despite these innovations, we should not underestimate how existing technology has already begun to break down language barriers. For instance, savvy travelers have been translating foreign language signage through smartphone camera features for over a decade using apps like Google Translate.
While the technology is not without flaws, its evolution marks a significant leap forward, hinting at a future where seamless multilingual communication is only a step away. Captain Kirk would surely appreciate how far technology has come towards the once far-off goal of universal translation.