30 August 2025

Silicon Valley Tech Leaders Invoke Religion In AI Debate

Prominent figures compare artificial intelligence to godlike power, sparking debate over existential risks, utopian promises, and the new faith surrounding technology.

In the heart of Silicon Valley, where innovation often borders on the audacious, the language used to describe artificial intelligence (AI) is taking on an increasingly religious tone. On August 30, 2025, prominent tech leaders and thinkers—many of them household names—are invoking imagery and vocabulary once reserved for the divine as they grapple with the promises and perils of AI’s rapid, largely unregulated development.

Geoffrey Hinton, the 77-year-old Nobel Prize winner known as the “Godfather of AI” for his pioneering work on deep learning and neural networks, is at the center of this new discourse. After leaving Google in 2023, where he worked for over a decade, Hinton has found a new calling: warning the public about the existential risks posed by unchecked AI. “It really is godlike,” Hinton told the Associated Press, reflecting on the technology he helped create. He feels “somewhat responsible” for the current state of affairs and urges, “We’re trying to wake people up. To get the public to understand the risks so that the public pressures politicians to do something about it.”

Hinton’s concerns are echoed by a growing chorus of tech luminaries. OpenAI CEO Sam Altman, whose company is behind the widely discussed ChatGPT, described his technology as a “magic intelligence in the sky.” Altman didn’t stop there—during a TED Talk, he remarked, “You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth.” It’s a statement that’s both awe-inspiring and unsettling, hinting at a future where machines eclipse human intellect.

Peter Thiel, co-founder of PayPal and Palantir, has gone so far as to connect AI’s rise to biblical prophecy, suggesting the technology could help bring about the Antichrist. Speaking to the Hoover Institution at Stanford University, Thiel mused, “There certainly are dimensions of the technology that have become extremely powerful in the last century or two that have an apocalyptic dimension. And perhaps it’s strange not to try to relate it to the biblical tradition.”

This blending of technology and theology isn’t limited to the doomsayers. Ray Kurzweil, the renowned author and computer scientist, has long predicted a transhumanist future where humans merge with AI. In his latest book, The Singularity Is Nearer: When We Merge with AI, Kurzweil asserts, “By 2045, which is only 20 years from now, we’ll be a million times more powerful. And we’ll be able to have expertise in every field.” He contends that in the coming decades, “We’re not going to actually tell what comes from our own brain versus what comes from AI. It’s all going to be embedded within ourselves. And it’s going to make ourselves more intelligent.” When asked if AI is his religion, Kurzweil replied, “Yes,” noting that his vision for the future shapes his purpose and actions.

Yet, not everyone is convinced that such rhetoric is warranted—or even grounded in reality. Dylan Baker, a lead research engineer at the Distributed AI Research Institute and a former Google employee, is among the skeptics. “I think oftentimes they’re operating from magical fantastical thinking informed by a lot of sci-fi that presumably they got in their formative years,” Baker told the Associated Press. “They’re really detached from reality.” He cautions that the allure of grand, apocalyptic narratives can lead to cult-like beliefs, adding, “These really big, scary problems that are complex and challenging to address—it’s so easy to gravitate towards fantastical thinking and wanting a one-size-fits-all global solution. I think it’s the reason that so many people turn to cults and all sorts of really out there beliefs when the future feels scary and uncertain. I think this is not different than that. They just have billions of dollars to actually enact their ideas.”

Anthropic CEO Dario Amodei, however, offers a more optimistic vision. In his 2024 essay, “Machines of Loving Grace: How AI Could Transform the World for the Better,” Amodei lays out a scenario where AI ushers in a new era of prosperity and liberty. “Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits. But it is a world worth fighting for,” he writes. “If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them.”

Even Meta CEO Mark Zuckerberg, whose company is investing heavily in AI, acknowledges the quasi-religious fervor. “When people in the tech industry talk about building this one true AI, it’s almost as if they think they’re creating God or something,” Zuckerberg said in a podcast appearance. Despite the enthusiasm, he remains skeptical about equating AI with divinity.

Academic experts have begun to analyze this phenomenon, noting its historical and psychological roots. Domenico Agostini, a professor at the University of Naples L’Orientale, points out that the word “apocalypse” originally meant “revelation” rather than catastrophe. “In the ancient world, apocalyptic is not negative,” he explains. Professor Robert Geraci of Knox College, who has studied the intersection of religion and technology, observes that “God is promising a new world. In order to occupy that new world, you have to have a glorious new body that triumphs over the evil we all experience.” Geraci’s 2010 book, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, was inspired by the striking similarities between the language of AI theorists and early Christian apocalyptic texts. “Only we’re gonna slide out God and slide in … your pick of cosmic science laws that supposedly do this and then we were going to have the same kind of glorious future to come,” he notes, adding, “What was once very weird is kind of everywhere.”

Geraci also highlights the financial incentives at play. “Twenty years ago, that fantasy, true or not, wasn’t really generating a lot of money,” he told the Associated Press. Now, with billions at stake, there’s a clear motive for figures like Sam Altman to promote the transformative potential of artificial general intelligence (AGI).

Max Tegmark, a physicist at the Massachusetts Institute of Technology and president of the Future of Life Institute, is another voice urging caution. In 2023, Tegmark spearheaded an open letter calling for a pause in the development of powerful AI systems, gathering over 33,000 signatures, with signatories including Elon Musk and Steve Wozniak. Tegmark sees danger in what he calls the “pseudoreligious pursuit to try to build an alternative God,” warning, “There are a lot of stories, both in religious texts and in, for example, ancient Greek mythology, about how when we humans start playing gods, it ends badly. And I feel there’s a lot of hubris in San Francisco right now.”

As AI continues its relentless advance, the debate over its risks and rewards is becoming as much about faith, meaning, and the search for transcendence as it is about technology. Whether these new prophets are heralds of a digital paradise or simply echoing age-old human anxieties remains to be seen. But one thing is clear: the line between science and spirituality in Silicon Valley is becoming ever more blurred.