British-Canadian computer scientist Geoffrey Hinton, often referred to as the 'godfather of AI,' is raising alarms about the existential risks posed by artificial intelligence. Following his recent Nobel Prize win, Hinton expressed serious concerns about the speed at which AI technology is advancing and the potential dangers if left unchecked.
During an interview on BBC Radio 4’s Today programme, Hinton discussed how his views on AI have evolved, especially concerning its capacity to surpass human intelligence. Initially, he estimated the chance of AI leading to human extinction at around 10%. Recently, he adjusted this prediction to between 10% and 20%. “Not really, 10 to 20 [per cent],” he stated when asked if his outlook had changed.
Hinton’s apprehension stems from the unprecedented nature of developing technologies more intelligent than humans. “You see, we’ve never had to deal with things more intelligent than ourselves before,” he remarked, stressing the rarity of instances where superior intelligence is effectively controlled by less capable intellects. He compared humans to toddlers next to powerful AI, stating, “Imagine yourself and a three-year-old – we’ll be the three-year-olds.”
His commentary surfaces amid growing concerns about the unregulated development of AI and its potential to create catastrophic outcomes for humanity. In 2023, Hinton resigned from his position at Google so that he could speak freely about the dangers of AI, particularly how “bad actors” could misuse the technology.
Hinton is advocating for stronger regulations to manage AI’s growth. “My worry is the invisible hand is not going to keep us safe,” he pointed out, criticizing the current approach where profits drive technological advancement. “The only thing... to force those big companies to do more research on safety is government regulation.” Hinton fears without regulation, the consequences could be dire for society.
The risks associated with AI are not merely hypothetical for Hinton. He warns of significant societal impacts, including job losses and widening wealth disparity. “If you have a big gap between rich and poor, it’s very bad for society,” he cautioned, underscoring the need for action to manage these consequences as AI continues to evolve.
Reflecting on how quickly he expected AI to advance when he began his career, Hinton noted, “I didn’t think it would be where we would be now. I thought at some point in the future we would get here.” The alarming reality is that many experts now predict AI systems could surpass human intelligence within the next two decades, a prospect he calls “a very scary thought” that raises pressing questions about the future.
Hinton’s concerns echo the sentiments of many experts who argue for proactive safety measures before AI reaches its full potential. “We need to be very careful and very thoughtful about developing this technology,” he reiterated, weighing AI’s promise, particularly in healthcare and efficiency gains across numerous industries, against its threats.
The rapid pace of AI development has created an urgent dialogue about the balance between innovation and safety. Fellow Turing Award laureate and “godfather of AI” Yann LeCun has taken a more optimistic stance, arguing that AI could help humanity rather than pose an existential threat. Nonetheless, Hinton stands firm on the necessity for caution, urging immediate attention from both policymakers and technology developers to address safety and ethical concerns.
Given Hinton’s stature and foundational contributions to AI, his warnings carry significant weight. The tech community and governments may benefit from heeding his calls for regulation and thoughtful development of AI systems to mitigate potential risks. He asserts, “We’re at the beginning of something entirely new, and we don’t know the outcome yet.” The course of AI development, he believes, hinges on our collective response to its challenges and opportunities.