Technology
19 September 2025

Huawei Unveils Ambitious AI Chip Roadmap In Shanghai

The Chinese tech giant outlines plans for record-breaking AI chips and supercomputing clusters, intensifying its rivalry with Nvidia and signaling a new era in U.S.–China tech competition.

In a move that is already sending ripples through the global technology sector, Huawei has unveiled a sweeping roadmap for its next-generation artificial intelligence (AI) chips and high-performance computing clusters. The announcement, made at the Huawei Connect 2025 conference in Shanghai on September 18, is the Chinese tech giant’s most detailed public disclosure of its chip ambitions since U.S. sanctions forced it to scale back in 2019. The timing is no accident: the news arrives just ahead of a high-stakes meeting between U.S. President Donald Trump and Chinese President Xi Jinping, and amid growing restrictions on American chipmaker Nvidia’s business in China.

At the heart of Huawei’s strategy is a bold claim: its upcoming SuperPoD and SuperCluster systems will be the world’s most powerful, outstripping even the most advanced solutions from U.S. rivals. According to Reuters, Eric Xu, Huawei’s rotating chairman, presented a detailed timeline for the release of new Ascend AI chips—starting with the Ascend 950 in 2026, followed by the Ascend 960 in 2027 and the Ascend 970 in 2028. Each chip generation is expected to double computing power, with Xu promising, “We’ll keep evolving Ascend chips to strengthen the foundation of AI computing power, both in China and around the world.”

Huawei’s roadmap doesn’t stop at chips. Xu revealed the company’s plans for SuperPoDs—massive computing systems that combine thousands of AI chips to function as a single, ultra-powerful computer. The Atlas 950 SuperPoD, set for launch in the fourth quarter of 2026, will support 8,192 Ascend chips, while the Atlas 960 SuperPoD, due a year later, will scale up to 15,488 chips. Xu claimed the Atlas 950 will “significantly surpass its counterparts on every major metric,” citing performance figures that seem almost otherworldly: 1,152 terabytes of memory and a throughput of 16 petabytes per second, some 62 times the figure Huawei cites for Nvidia’s upcoming NVL144 system.
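Taken at face value, those aggregate numbers can be broken down per chip. The short Python sketch below does only that arithmetic, using the figures quoted in this article and assuming the totals are spread evenly across the 8,192 chips (an assumption for illustration, not a Huawei specification):

# Back-of-envelope check of the Atlas 950 SuperPoD figures cited above.
# Assumes the aggregate memory and throughput are spread evenly across the
# chips; all inputs come from this article, not from Huawei datasheets.

CHIPS = 8_192                     # Ascend chips per Atlas 950 SuperPoD
TOTAL_MEMORY_TB = 1_152           # claimed aggregate memory
TOTAL_THROUGHPUT_PB_S = 16        # claimed aggregate throughput
CLAIMED_NVIDIA_RATIO = 62         # claimed advantage over Nvidia's NVL144

memory_per_chip_gb = TOTAL_MEMORY_TB * 1_000 / CHIPS
throughput_per_chip_tb_s = TOTAL_THROUGHPUT_PB_S * 1_000 / CHIPS
implied_nvl144_pb_s = TOTAL_THROUGHPUT_PB_S / CLAIMED_NVIDIA_RATIO

print(f"Memory per chip:       ~{memory_per_chip_gb:.0f} GB")          # ~141 GB
print(f"Throughput per chip:   ~{throughput_per_chip_tb_s:.2f} TB/s")  # ~1.95 TB/s
print(f"Implied NVL144 figure: ~{implied_nvl144_pb_s:.2f} PB/s")       # ~0.26 PB/s

The per-chip throughput comes out just under 2 terabytes per second, in the same range as the data transfer rate Huawei quotes for the Ascend 950 itself later in this article, though the company has not said the two figures measure the same thing.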

As reported by Cygnus, Huawei’s SuperPoDs are not just about raw power—they are designed to be the backbone of AI infrastructure for years to come. Xu described the architecture as a “new computing paradigm,” emphasizing that sustainable computing power is essential for the next wave of AI breakthroughs. “Chips are the building blocks of computing power. And at Huawei, Ascend chips are the foundation of our AI computing strategy,” Xu asserted during his keynote address.

For those tracking the global AI arms race, the numbers are staggering. Huawei says its Atlas 950 SuperPoD will have 56.8 times more neural processors than Nvidia’s NVL144 and 6.7 times the computing power. The Atlas 950 SuperCluster, meanwhile, will integrate more than 500,000 Ascend processors, and the Atlas 960 SuperCluster—slated for 2027—will boast over a million chips. These clusters are expected to outperform all competitors, supporting AI models with trillions of parameters and enabling breakthroughs in fields from finance to scientific research.
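Some of those multiples can be loosely sanity-checked from chip counts alone. The sketch below again uses only numbers quoted in this article, plus the assumption that “NVL144” denotes a 144-processor system, as the name suggests; it is an illustration rather than a verified comparison:

import math

# Rough sanity checks on the cluster-scale claims above, using only numbers
# quoted in this article. The 144-processor count for Nvidia's NVL144 is
# inferred from the product name, not stated in the article.

ATLAS_950_POD_CHIPS = 8_192       # chips per Atlas 950 SuperPoD
ATLAS_960_POD_CHIPS = 15_488      # chips per Atlas 960 SuperPoD
NVL144_PROCESSORS = 144           # assumed from the name "NVL144"

# Processor-count ratio vs. NVL144 (the article quotes 56.8x).
print(f"Atlas 950 vs NVL144: {ATLAS_950_POD_CHIPS / NVL144_PROCESSORS:.1f}x")      # ~56.9x

# SuperPoDs needed to reach the claimed cluster sizes, assuming clusters are
# assembled from whole SuperPoDs.
print(f"Pods for >500,000 chips:   {math.ceil(500_000 / ATLAS_950_POD_CHIPS)}")    # 62
print(f"Pods for >1,000,000 chips: {math.ceil(1_000_000 / ATLAS_960_POD_CHIPS)}")  # 65

The 8,192-to-144 ratio works out to roughly 56.9, essentially the 56.8 figure Huawei cites; the 6.7-times computing-power claim cannot be checked the same way without per-processor performance data.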

But the technical leaps are not limited to hardware. Xu also announced that Huawei will open-source key elements of its AI software stack, including the CANN compiler and Mind series application toolkits, by the end of 2025. The company’s openPangu foundation models will also be made fully open source. This move is seen as an effort to foster a broader ecosystem around Huawei’s chips, making them more attractive to developers and enterprise customers both inside and outside China.
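For developers, the practical point is that CANN sits beneath the frameworks they already use on Ascend hardware. As a rough illustration, here is a minimal sketch of running a matrix multiply on an Ascend NPU through Huawei’s MindSpore framework, which dispatches operators via CANN; the exact API surface varies across MindSpore versions, so treat this as an assumption-laden example rather than canonical usage:

# Minimal illustrative sketch: a matrix multiply executed on an Ascend NPU via
# MindSpore, which compiles and dispatches operators through the CANN stack.
# Function names and context settings may differ between MindSpore versions.

import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

ms.set_context(device_target="Ascend")   # route execution to Ascend hardware

a = Tensor(np.random.rand(1024, 1024).astype(np.float32))
b = Tensor(np.random.rand(1024, 1024).astype(np.float32))

c = ops.matmul(a, b)   # executed by CANN-compiled kernels on the NPU
print(c.shape)         # (1024, 1024)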

What’s driving this push? According to the Financial Times and Reuters, escalating U.S.–China tech tensions are a major factor. In recent weeks, Chinese regulators have ordered domestic tech giants like ByteDance and Alibaba to halt purchases of Nvidia’s AI chips, including the RTX 6000D. These restrictions, combined with ongoing U.S. export controls, have severely limited access to foreign semiconductors—making homegrown alternatives like Huawei’s Ascend line all the more critical.

Industry analysts see Huawei’s announcement as both a technological and geopolitical statement. Alfred Wu, an associate professor at the National University of Singapore, told Cygnus, “China is trying to show progress across multiple fronts. But the reality is that tensions with the U.S. are quietly escalating, not easing.” The market appears to agree: Chinese semiconductor stocks rose about 2% following reports of Beijing’s new restrictions on Nvidia, reflecting investor optimism about the shift toward domestic solutions.

Huawei’s chip roadmap is particularly notable for its focus on proprietary high-bandwidth memory (HBM) technology, an area previously dominated by South Korea’s SK Hynix and Samsung. By developing its own HBM, Huawei aims to further reduce reliance on foreign suppliers and improve the efficiency of data transfer within its AI systems. The Ascend 950 chip, for instance, will feature a data transfer rate of 2 terabytes per second—2.5 times faster than the current Ascend 910C.
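Reading “2.5 times faster” as 2.5 times the older chip’s rate, those two numbers imply a figure for the current generation. The snippet below is just that back-of-envelope inference, not a published Ascend 910C specification:

# Implied data transfer rate for the current Ascend 910C, derived only from
# the two figures quoted above (2 TB/s for the Ascend 950, claimed 2.5x faster).
# This is back-of-envelope inference, not a Huawei-published spec.

ASCEND_950_RATE_TB_S = 2.0
CLAIMED_SPEEDUP = 2.5

print(f"Implied Ascend 910C rate: ~{ASCEND_950_RATE_TB_S / CLAIMED_SPEEDUP:.1f} TB/s")  # ~0.8 TB/s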

And the company’s ambitions extend beyond AI. Upgrades to Huawei’s Kunpeng server processors are planned for 2026 and 2028, targeting general-purpose computing and mission-critical applications in sectors like finance and telecommunications. The TaiShan 950 SuperPoD, built on the Kunpeng 950 processor, promises to be the world’s first general-purpose computing SuperPoD, capable of replacing legacy mainframes and boosting performance for databases and real-time analytics.

Of course, building hardware at this scale comes with unique challenges. Xu acknowledged persistent difficulties due to China’s lag in advanced semiconductor manufacturing nodes, but emphasized that Huawei is leveraging its strengths in networking and power infrastructure to compensate. The company’s new UnifiedBus 2.0 interconnect protocol, released as an open standard, is designed to link more than 10,000 neural processing units (NPUs) within a SuperPoD, ensuring high reliability, low latency, and massive bandwidth.

“Our goal is to make sure that the Atlas 950 SuperPoD and Atlas 960 SuperPoD—which will have several thousand or even more than 10,000 NPUs—will work like a computer,” Xu explained. “With these innovations and designs, we’ve made optical interconnect 100 times more reliable, and extended the range of our interconnect to over 200 meters.”

The stakes could hardly be higher. As AI becomes the dominant driver of computing power demand, nations and corporations alike are racing to secure their place in the new technological order. Huawei’s latest roadmap signals not just a bid for market leadership, but a declaration of China’s intent to build a self-sufficient semiconductor ecosystem—one that can withstand external shocks and compete head-to-head with the likes of Nvidia on the world stage.

Whether Huawei can deliver on these lofty promises remains to be seen. But for now, the company’s vision of SuperPoDs and SuperClusters—backed by proprietary chips, memory, and interconnect technology—has set a new bar for ambition in the AI hardware race. Investors, competitors, and policymakers around the globe will be watching closely as the first systems roll off the line in the coming years.