Advanced artificial intelligence systems pose risks ranging from job displacement to the facilitation of terrorism to uncontrollable malfunctions, according to a first-of-its-kind international report released on January 29, 2025. The document, the International AI Safety Report (the full version of the interim International Scientific Report on the Safety of Advanced AI published in May 2024), was unveiled just before the Paris AI Action Summit scheduled for February 10-11, 2025.
The synthesis of findings is backed by 30 countries, including both the United States and China, reflecting rare cooperation amid the global race for AI supremacy. The recent debut of a low-cost chatbot from Chinese startup DeepSeek has only intensified that competition, especially against the backdrop of U.S. export controls on advanced chips.
Yoshua Bengio, the report’s lead author and a renowned AI scientist, explained the intent behind the document: “The stakes are high. We need to get communities, governments, and companies to come together and make informed decisions as AI continues to evolve.” The report serves as a comprehensive guide for officials working to establish frameworks and boundaries for the rapidly developing field.
The report identifies three distinct categories of risk from general-purpose AI, the technology typified by systems like OpenAI’s ChatGPT: malicious use, malfunctions, and systemic risks. It catalogs well-documented harms, including deepfakes, scams, and biased outputs, but also highlights newly emerging risks as AI capabilities expand.
Bengio noted the varying opinions among the 96 experts who contributed to the report, particularly over when, and under what scenarios, AI might surpass human capabilities. “Some scenarios are very beneficial; others are terrifying,” he said. The lack of consensus among the authors reflects broader uncertainty within the AI research community.
Among the significant issues raised is AI’s potential to facilitate the creation of biological or chemical weapons, since models can readily generate detailed technical plans. While the report acknowledges the theoretical concern, it also notes the limits of those plans, commenting that it remains “unclear how well they capture the practical challenges” of actually producing such weapons.
Concerns about job displacement are compounded by the unpredictable nature of AI’s impact on the labor market. While some experts believe AI might create more employment opportunities, others predict it could lead to wage decreases and job losses. “The reality is, no one knows how it will play out,” the report concludes.
The report does not shy away from the risk of AI systems operating beyond human control, whether by actively undermining oversight or because humans simply pay less attention. The complexity of AI models, combined with developers’ limited understanding of their inner workings, further complicates risk management.
The impetus for the report traces back to the inaugural global summit on AI safety, hosted by the United Kingdom at Bletchley Park in November 2023, where countries committed to working together to address the potential “catastrophic risks” of AI. South Korea hosted a follow-up meeting in Seoul in May 2024, at which companies pledged voluntary commitments to AI safety.
This report, also backed by the United Nations and the European Union, is intended to stay relevant amid shifting political climates and changing government policies. Notably, it follows the transition from President Joe Biden to Donald Trump, who revoked Biden’s executive order on AI safety shortly after taking office but has not dissolved the AI Safety Institute his predecessor established.
At the upcoming Paris summit, world leaders, tech executives, and civil society representatives will discuss and likely sign a “common declaration” promoting responsible AI development, underscoring the international commitment to addressing the concerns the new report highlights.
As Bengio emphasized, the aim is not to issue strict evaluations or rank risks, but to lay out the existing scientific literature on AI clearly. “We need to strive for a greater comprehension of the systems we are implementing and the risks they entail so we can make well-informed decisions going forward,” he stressed.
With AI technologies integrating ever more deeply across sectors, the urgency of safeguards and regulatory measures cannot be overstated. The international community is called on to pursue thorough discussion and concrete action, ensuring that the advanced systems it endorses do not come at the cost of societal safety and integrity.