San Francisco’s tech world is no stranger to bold bets, but Nvidia’s latest move has set even seasoned industry watchers abuzz. On September 28, 2025, Nvidia announced it would invest up to $100 billion in OpenAI, a staggering sum meant to supercharge the next wave of artificial intelligence infrastructure. This isn’t just another chip launch or software update; it’s a commitment to build at least ten gigawatts of AI capacity, powered by Nvidia’s own platforms, with the first gigawatt expected to come online in the latter half of 2026. For context, ten gigawatts is roughly the continuous output of ten large nuclear reactors: enough computing muscle to transform how AI models are trained and deployed and, perhaps, to reshape the foundations of the digital economy.
According to The Eastern Herald, the outlines of the deal are both simple and audacious. OpenAI, the creator of ChatGPT and a household name in consumer AI, will construct and operate the new infrastructure, while Nvidia supplies not only the hardware but also the capital. As the project progresses, Nvidia will take a non-controlling stake—its investment coming in tranches tied to the actual capacity deployed. The two sides are framing the deal as a marriage of compute and cash, designed to turn pent-up demand for ever-more-capable AI models into a predictable pipeline of funding and supply. In an industry where demand for AI accelerators often outpaces supply, this partnership aims to break the bottleneck.
But is this a visionary leap or a risky gamble? The financial world is split. As AI News Desk reports, some analysts have rushed to raise their price targets for Nvidia, seeing the partnership as proof that AI spending is maturing into multi-year commitments rather than short-lived pilot projects. "The $100 billion headline is the purest version of that assumption yet," one market observer noted, pointing to the sheer size and ambition of the plan. Supporters believe this cements Nvidia’s role at the heart of the AI revolution, ensuring long-term demand for its chips and platforms.
Yet, not everyone is convinced. Skeptics see echoes of the telecom and dot-com bubbles of decades past, when companies like Cisco financed massive network expansions on the assumption of insatiable demand, only to see the cycle snap when reality failed to match the hype. By that reading, Nvidia’s investment could simply return as orders for its own systems, flattering current growth while deepening concentration risk down the road. As AI News Desk puts it, “Critics warn the deal could be a circular financing loop similar to past tech industry ‘house of cards’ scenarios, while supporters see it as a strategic move to secure long-term chip demand.”
The deal’s structure is designed to address these concerns. Nvidia’s capital is tied to independent facilities, operated by OpenAI and its partners. The company is not acquiring control, and the buildout will proceed in stages, each linked to actual deployed capacity. Still, the sheer scale of the investment has drawn the attention of regulators in Washington and Brussels, who are already probing AI supply chains and the potential for market dominance. Antitrust lawyers are asking whether a dominant supplier deepening its financial ties with a top buyer could tilt competition in a market where alternatives are still maturing.
Technically, the project is a leap into the future. The new infrastructure platform is named Vera Rubin, after the astronomer whose work helped reveal dark matter. Nvidia wants the branding to signal ambition: these aren’t machines for incremental gains but for chasing orders of magnitude. The first gigawatt of capacity on Vera Rubin is expected to generate its "first tokens"—the digital output of AI models—in the second half of 2026. If all goes according to plan, training runs that once took months could be compressed dramatically, and inference at a planetary scale could become routine. That’s the dream, anyway.
The practical challenges, though, are immense. Building AI campuses at this scale is less about racks and servers and more about energy projects with compute attached. Where will the power come from, and at what cost? How quickly can utilities upgrade transmission lines to meet the demand? Which regions will offer the permitting and incentives needed to host these massive data centers, and which will balk at the strain on local grids and water supplies? As The Eastern Herald notes, "AI campuses will look less like rooms of servers and more like energy projects with compute attached."
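To put rough numbers on that framing, here is a back-of-envelope sketch in Python. Every constant below is an illustrative public estimate, not a figure from The Eastern Herald or from the deal itself:

```python
# Back-of-envelope grid math for a 10 GW AI buildout.
# All constants are rough illustrative estimates, not figures from the deal.

TOTAL_CAPACITY_GW = 10     # buildout target from the announcement
AVG_HOME_DRAW_KW = 1.2     # assumed average continuous draw of a US household
US_AVG_GRID_LOAD_GW = 460  # assumed average total US electricity demand

homes_equivalent = TOTAL_CAPACITY_GW * 1e6 / AVG_HOME_DRAW_KW  # GW -> kW -> homes
grid_share_pct = TOTAL_CAPACITY_GW / US_AVG_GRID_LOAD_GW * 100

print(f"~{homes_equivalent / 1e6:.1f} million homes' worth of continuous load")
print(f"~{grid_share_pct:.1f}% of average US grid demand")
```

Even at roughly two percent of national demand, that load would arrive concentrated in a handful of sites, which is why transmission and permitting, not chips, may prove the binding constraints.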
Within the industry, the partnership is also a bet on timing. Nvidia’s current platforms are considered the gold standard, offering a blend of performance, software, and networking at scale. But technology moves fast. Competitors are narrowing the gap in some workloads, and big cloud companies are developing their own custom accelerators to reduce reliance on Nvidia. Even OpenAI has explored bespoke options, though it’s doubling down on Nvidia for this buildout. Locking in gigawatt-class purchases signals confidence that Nvidia’s next platforms will maintain their edge as AI models become larger, multimodal, and more memory-hungry.
Investors are left to puzzle over what ten gigawatts actually means in revenue terms. Translating power budgets into shipments is tricky, dependent on everything from rack density to software efficiency, as the sketch after this paragraph illustrates. What’s clear is the direction of travel: toward bigger, more industrial AI infrastructure. The economics of AI remain uneven; some companies report big productivity gains from AI copilots and bots, while others are still struggling to find a return on investment beyond experimentation. CIOs who greenlit pilots in 2023 and 2024 are now writing checks for production deployments, but they’re also asking tough questions about cost, latency, and data governance.
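One way to see why the translation is slippery is a toy model. Every parameter below is an illustrative assumption, not a disclosed deal term:

```python
# Toy model: power budget -> accelerator count -> implied hardware revenue.
# Every parameter is an illustrative assumption; none come from Nvidia,
# OpenAI, or the reporting cited in this article.

CAPACITY_GW = 10                # announced buildout target
WATTS_PER_ACCELERATOR = 1_200   # assumed board-level draw per GPU
OVERHEAD_FACTOR = 1.4           # assumed multiplier for cooling, networking,
                                # CPUs, and other non-GPU load (PUE-like)
PRICE_PER_ACCELERATOR = 40_000  # assumed blended system price per GPU, USD

usable_watts = CAPACITY_GW * 1e9 / OVERHEAD_FACTOR
accelerators = usable_watts / WATTS_PER_ACCELERATOR
implied_revenue = accelerators * PRICE_PER_ACCELERATOR

print(f"~{accelerators / 1e6:.1f} million accelerators")
print(f"~${implied_revenue / 1e9:.0f} billion in implied hardware revenue")
```

Nudge the assumed per-chip power or price by twenty percent and the answer swings by tens of billions of dollars, which is why analyst estimates of the deal’s revenue impact diverge so widely.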
The partnership also has a political dimension. A single investment of this size, spanning multiple regions, all but guarantees that the scrutiny from Washington and Brussels will intensify, with the deal’s scale alone enough to focus attention on potential competition impacts. Nvidia’s defenders point to the company’s neutral stance, selling to anyone who buys and supporting multiple frameworks, but a tie-up with OpenAI at this scale could upset that balance.
There’s a cultural layer, too. AI remains as much a story about status as about technology. Nvidia’s chips have become a shorthand for ambition, OpenAI’s products for possibility. Put them together at industrial scale and you get something investors can understand without reading a white paper: dominance expressed as infrastructure. As The Eastern Herald observes, "If the AI era will be defined by those with the most compute, then the surest way to defend a lead is to fuse the chip roadmap to the customer roadmap."
Of course, the risks are real. Shifts in bond yields, energy prices, or export-control policy could disrupt even the most carefully laid plans. The assumption underpinning the current AI rally is that demand will overwhelm these risks, but history offers cautionary tales. If Nvidia and OpenAI hit their milestones, they’ll have shown the industry can absorb ten gigawatts of specialized compute in just a few years. If delays or economic headwinds mount, the same number could become a symbol of overreach.
For now, the market is treating the announcement as both validation and challenge. It’s a bold move, one that will be measured in utilization charts and real-world deployments, not just headlines. The partnership signals that the AI race is only accelerating, with Nvidia determined to set the pace as long as it can out-engineer—and out-supply—the competition.