In a move that’s sending shockwaves through the global tech industry, Nvidia has announced plans to invest up to $100 billion in OpenAI, cementing its status as the linchpin of the artificial intelligence (AI) hardware market. The deal, structured as a massive chip-leasing agreement, will see Nvidia supplying the systems required for at least 10 gigawatts of compute power starting in late 2026. According to Reuters, this bold partnership “intensifies the AI datacenter race,” but it’s also raising eyebrows among analysts and competitors alike, who warn of potential bubbles and the risks of circular financing.
For years, Nvidia has dominated the AI-accelerator sector, commanding an 80% market share as of September 2025, according to Susquehanna. But the landscape is shifting. Susquehanna now projects Nvidia’s share could drop to 67% by 2030, with Broadcom and AMD steadily gaining ground. Broadcom is forecast to capture 14% of the market, roughly $65 billion in revenue, while AMD is expected to secure just over 4%, or around $20 billion. This changing of the guard is driven by hyperscale customers seeking cost-effective alternatives and more control over their AI infrastructure.
Broadcom, in particular, is emerging as Nvidia’s most formidable challenger. The company has quietly built a reputation for delivering custom AI “XPUs”—chips designed in close partnership with cloud giants—and for providing the Ethernet networking that knits AI clusters together. CEO Hock Tan told MarketWatch that the AI semiconductor total addressable market (TAM) could reach “$60 billion to $90 billion” by fiscal year 2027, thanks to three hyperscale clients planning deployments at an unprecedented scale. AI chips now represent the majority of Broadcom’s semiconductor sales, and new, large-scale custom deals are reportedly in the pipeline.
AMD, meanwhile, is ramping up its MI400 series, led by the MI450, aiming for broader adoption among hyperscalers through 2026 and 2027. While its market share is projected to remain modest, AMD has made headway by focusing on interoperability with existing server fleets and aggressive pricing. Still, as Barron’s notes, AMD’s data-center growth has lagged behind Nvidia’s in 2025, and the company faces ongoing challenges in matching Nvidia’s software ecosystem and consistent top-tier performance.
Then there’s Marvell, the surprise up-and-comer in the custom XPU and optics space. Rather than attempting to dethrone Nvidia directly, Marvell is carving out a niche as the “arms dealer” for tailored AI silicon and co-packaged optics. The Next Platform reports that Marvell’s custom XPU sales reached just under $300 million in a recent quarter, with AI electro-optics revenue up an eye-popping 4.5 times year-over-year. By providing advanced packaging and high-bandwidth optics, Marvell is enabling cloud giants to build their own accelerators and scale their data centers faster than ever before.
But the competitive landscape doesn’t end with these established players. Tech behemoths like Google and AWS are doubling down on in-house silicon. Google’s TPU v5p and Trillium (its sixth-generation TPU) are setting new benchmarks for training throughput and efficiency, while AWS’s Trainium2-powered instances deliver four times the training performance of their first-generation predecessors and 30–40% better price/performance than comparable GPU-based instances. These advances don’t necessarily dethrone Nvidia, but they do chip away at its dominance, especially for specific workloads where price and performance are paramount.
And let’s not forget the wildcard: China’s Huawei. Despite facing export controls that complicate its path to global market share, Huawei has embarked on a three-year campaign to overtake Nvidia in AI chips, unveiling next-generation accelerators and “SuperPod” designs. While its international ambitions are constrained, domestic demand in China could still move the needle in the years ahead.
Amid this flurry of competition and innovation, Nvidia’s $100 billion bet on OpenAI stands out not just for its size, but for its potential to reshape the industry. As Barron’s puts it, “Nvidia remains the dominant chip provider amid surging demand for AI-powered technologies.” The deal is expected to reinforce Nvidia’s ecosystem lock-in, thanks to its integrated systems, CUDA software moat, and high-bandwidth memory (HBM) supply pipeline. Yet, the sheer scale of the investment has raised concerns about antitrust scrutiny and the specter of “circular spend”—where funds are recycled between companies, potentially inflating valuations without real economic growth.
Analysts are sounding the alarm about a possible AI bubble. As Finance Monthly and Chosun Biz report, the deal’s circular-financing structure has heightened fears about the sustainability of the AI boom, particularly in markets like South Korea. “The fear is that such massive investments could lead to an overheated market, where valuations are driven more by speculation than by actual technological advancements,” notes one analyst. This scenario evokes memories of past tech bubbles, where soaring investments led to painful corrections.
Despite these risks, few expect Nvidia to be dethroned by 2030. Even under Susquehanna’s bearish scenario, Nvidia is still projected to control about two-thirds of a $475 billion AI semiconductor market by the end of the decade. As Christopher Rolland of Susquehanna explains, “Nvidia’s share [will drop] to 67% as major tech companies seek cost-effective alternatives.” The real threat, experts say, is not a loss of dominance, but a squeeze on Nvidia’s profit margins as Broadcom, Marvell, and in-house cloud chips capture more incremental deployments—especially for inference workloads.
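The share projections quoted above translate into dollar figures that are easy to sanity-check against the $475 billion market estimate. A minimal sketch in Python, using only the numbers cited in this article (Nvidia’s implied revenue is derived here, not a figure the article states):

```python
# Susquehanna's projected 2030 AI-semiconductor total addressable market,
# as quoted in this article.
TAM_2030 = 475e9  # dollars

# Projected 2030 market shares from the same forecast (fractions).
shares = {
    "Nvidia": 0.67,    # down from ~80% in 2025
    "Broadcom": 0.14,  # article quotes ~$65B in revenue
    "AMD": 0.04,       # article quotes ~$20B in revenue
}

for vendor, share in shares.items():
    revenue = TAM_2030 * share
    print(f"{vendor}: {share:.0%} of TAM -> ${revenue / 1e9:.1f}B implied revenue")

# Consistency check against the article's quoted revenue figures:
# 14% of $475B = $66.5B (article rounds to "$65 billion");
# 4%  of $475B = $19.0B (article says "around $20 billion").
```

The small gaps between the computed values and the quoted ones reflect rounding in the original forecast, not an inconsistency in the article.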
What does all this mean for enterprises and investors? According to MarketWatch, organizations should prepare for a multi-vendor accelerator strategy by 2026–2028, using Nvidia for cutting-edge training and considering Broadcom or Marvell for steady-state inference or well-characterized workloads. Budgets should also account for the rising importance of optical fabrics and advanced packaging, as these factors increasingly determine the price and performance of large-scale AI clusters.
For investors, the playbook is nuanced. Nvidia remains a core holding, buoyed by near-term demand from the OpenAI deal, but regulatory risks and potential pricing normalization loom. Broadcom offers upside if its custom AI and Ethernet programs hit volume, while AMD is a “call option” on the scaling of its MI400/MI450 chips and the maturation of its ROCm software stack. Marvell, meanwhile, represents a pure play on customization and optics, though its reliance on a small set of hyperscaler customers is a risk to watch.
As the AI gold rush accelerates, one thing’s clear: the market is evolving rapidly, with new challengers, technologies, and business models emerging at breakneck speed. Nvidia’s $100 billion investment in OpenAI may set the pace, but the race is far from over—and the outcome, as ever in tech, remains tantalizingly uncertain.