Technology
23 August 2025

AI Experts Warn Global Action Needed Now

As AI advances accelerate, leading voices urge urgent international cooperation to prevent catastrophic risks and unchecked technological power.

The world is being swept up in a technological whirlwind, and the warnings from leading experts are growing louder: artificial intelligence (AI) is advancing at a breakneck pace, and without immediate, coordinated global action, the consequences could be disastrous. On August 22, 2025, two of the most prominent voices in the field, tech ethicist Tristan Harris and Geoffrey Hinton, widely known as the "godfather of AI," sounded urgent alarms about the direction of AI development and the risks of failing to rein it in.

Tristan Harris, speaking candidly on "TMZ Live," pulled no punches. He warned that unless the world gets its act together on AI—and all at the same time—"we're all gonna be screwed." Harris described the current state of affairs as an AI arms race, with the United States and China at the forefront, each pushing the boundaries of technology as fast as possible. But, as he put it, "that's a bad flex." Instead of competing to see who can build the most powerful AI models, Harris argued, the real race should be about who can regulate AI most effectively, and that requires global collaboration.

Harris pointed to history for guidance, referencing the 1987 international effort that brought 139 countries together to address the Earth's ozone hole. "We gotta run back that playbook now to rein in artificial intelligence," he urged, emphasizing that the stakes are just as high, if not higher. The implication is clear: if the world could unite to tackle a planetary environmental crisis, surely it can do the same to manage the risks posed by AI.

The urgency of Harris's message was echoed by Geoffrey Hinton, who spoke to Fortune magazine about the existential dangers of the current trajectory. Hinton, who left Google in 2023 specifically so he could speak out freely about the technology's risks, compared the unchecked development of AI to a nuclear arms race. He didn't mince words: "We're not ready, and we're not even trying to be." According to Hinton, the driving force behind today's AI boom isn't the betterment of humanity, but "shareholder pressure." The result, he said, is a profit-fueled sprint to build ever more powerful AI models, with little regard for the societal risks or ethical responsibilities involved.

"This is not progress for humanity, this is progress for corporate interests," Hinton remarked, highlighting a fundamental disconnect between the goals of tech giants and the needs of society at large. He warned that the real danger isn't just misinformation or job losses—though those are serious enough—but the possibility that AI systems could surpass human control. If that day comes, he cautioned, "it will be very bad." Hinton's assessment is stark: humanity is neither prepared for nor actively preparing for the moment when AI could outpace human oversight.

Both Harris and Hinton agree on the solution: international cooperation. Harris stressed that both the U.S. and China stand to benefit from keeping AI development in check, warning that "nobody wants bots leveling up while humans spiral down." The alternative, he suggested, is a future where AI has "the juice to straight-up rewrite reality," and rolling out such technology without international agreement on protocols is "a disaster waiting to happen."

Hinton, for his part, called for the establishment of global oversight, international agreements, and shared ethical standards—much like the world did with nuclear treaties. "This challenge cannot be left to individual companies chasing profits," he said. "It demands international cooperation." He urged governments, regulators, and researchers to slow the pace of AI development and shift the focus toward safety, transparency, and long-term thinking, rather than reckless acceleration. The technology, he argued, has already outstripped society’s regulatory capacity, and the gap is only widening.

The warnings from Harris and Hinton come at a time when other experts in the AI community are also raising red flags. Microsoft AI CEO Mustafa Suleyman recently introduced the concept of "AI psychosis," a new psychological risk where individuals lose touch with reality after excessive interactions with AI systems. According to a report from Business Insider, Suleyman described this as a "real and emerging risk," particularly for vulnerable individuals who become deeply immersed in conversations with AI agents. It's yet another layer of complexity in an already fraught landscape.

Behind the scenes, the pressure on tech companies to deliver ever more advanced AI is immense. Hinton alleged that shareholder demands are the main force propelling the industry forward, with billions being poured into model development and data monetization. Meanwhile, as he pointed out, "very few companies are willing to address the existential dangers posed by artificial general intelligence." The absence of a robust ethical framework, he argued, is one of the most glaring weaknesses in the current strategy.

Harris’s analogy to the ozone crisis is more than just a historical reference—it’s a call to action. In 1987, countries set aside their differences to confront a threat that transcended borders. The result was the Montreal Protocol, widely regarded as one of the most successful environmental agreements in history. Harris believes that AI regulation demands a similar level of international commitment and coordination. The stakes, he suggested, are nothing less than the future of human civilization.

But can the world really come together in time? The rivalry between the U.S. and China is fierce, and as Harris noted, both countries are racing ahead, wary of falling behind. Yet, both also have much to lose if AI development spirals out of control. The challenge, then, is not just technical, but political and diplomatic—a test of whether nations can look beyond short-term interests and corporate profits for the sake of long-term human safety.

The debate over whether to pause, regulate, or accelerate AI development is intensifying, and the stakes are climbing ever higher. As Harris put it bluntly, AI "has the juice to straight-up rewrite reality," and without a global agreement on how to manage that power, the risks are simply too great to ignore.

With the world at a crossroads, the voices of Harris, Hinton, and others serve as a clarion call. The next chapter in the story of AI will depend not just on technological breakthroughs, but on whether humanity can muster the collective will to guide this powerful force safely and ethically into the future.