
50% Of AI Data Centers Have Quietly Been Cancelled Or "Delayed"
AI Summary
In 2025, the world's largest companies reportedly spent around $400 billion on capital expenditures to support the development of artificial intelligence. Adjusted for inflation, that sum is equivalent to nine Manhattan Projects or two Apollo programs, all within a single year and solely for infrastructure. To put it into perspective, more money went into constructing and fitting out data centers last year than into building single-family residential homes. This figure does not even include non-public companies like Anthropic or OpenAI, for which reliable financial data is harder to obtain, nor does it cover costs beyond facility construction and fit-out, such as staffing, energy, security, and strategic acquisitions. These numbers are also specific to 2025, and recent announcements suggest that spending this year will once again reach new record highs.
While the substantial figures in the AI industry may not be surprising, it's notable that in the almost four years since the public release of ChatGPT, not a single one of these companies has managed to turn a profit from this technology, even with generous financial projections and accounting methods. The exception has been Nvidia and other upstream hardware and chip manufacturers, which consistently generate profits by supplying the necessary components, akin to selling "pickaxes and shovels" in a gold rush. However, a closer look at the numbers raises questions about where those shovels are actually going.
Despite promises of record levels of new spending on data centers, reports indicate that over half of the sites slated to open this year have been delayed or canceled. This presents a logical paradox: how can companies promise increased spending on data centers while construction of those same data centers is being widely delayed or canceled? The paradox holds even before asking how end customers will finance these developments. Other logical inconsistencies are emerging too: companies claim they cannot keep up with demand for new chips, yet their inventories are growing. They are depreciating hardware over six years while simultaneously stating that next year's models will render current ones obsolete.
Furthermore, there are significant concerns about powering this infrastructure. Nearly half of the US data centers planned for 2026 are expected to be delayed, partly due to pushback from communities that do not want data centers built nearby. Some data centers are served by as many as ten natural gas plants, raising questions about energy consumption and demand.
Nvidia, due to its market capitalization, profit, and reinvestment in the industry, has become a pivotal company in this landscape. While it is one of the most analyzed companies in financial history, three major questions about its market position are worth understanding, because any significant change in its share price will ripple through the entire economy.
The first question is: where are all these chips actually going? Jensen Huang, Nvidia's CEO, claimed the company would ship around 10 gigawatts (GW) of GPUs in 2025. While quick calculations based on their product lineup and reported sales suggest this might be a slight overestimation, it's roughly accurate. This figure was mentioned during an announcement of Nvidia's partnership with OpenAI to invest up to $100 billion to help OpenAI build over 10 GW of compute capacity, roughly equivalent to Nvidia's entire annual GPU output. This led to accusations of circular dealing, but the core problem is that, according to Goldman Sachs estimates, only about 7.7 GW of AI data centers are currently operational globally.
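As a rough sanity check on the gigawatt figure, the sketch below converts an assumed shipment count and per-GPU power draw into total gigawatts. Every input is an illustrative assumption, not a figure Nvidia has published, so treat this as a sketch of the arithmetic rather than a verified estimate.

```python
# Back-of-the-envelope check of the "10 GW of GPUs" claim.
# All inputs are illustrative assumptions, not Nvidia-reported figures.

ASSUMED_GPUS_SHIPPED_2025 = 6_000_000   # assumed annual data-center GPU shipments
ASSUMED_WATTS_PER_GPU = 1_200           # assumed board power of a Blackwell-class GPU (W)
ASSUMED_SERVER_OVERHEAD = 1.4           # assumed multiplier for CPUs, networking, fans per GPU

gpu_power_gw = ASSUMED_GPUS_SHIPPED_2025 * ASSUMED_WATTS_PER_GPU / 1e9
with_overhead_gw = gpu_power_gw * ASSUMED_SERVER_OVERHEAD

print(f"GPU silicon alone:    {gpu_power_gw:.1f} GW")
print(f"With server overhead: {with_overhead_gw:.1f} GW")
# Under these assumptions, ~10 GW of shipments is plausible only if shipment
# counts or per-GPU power sit toward the high end of the assumed ranges.
```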
Although many new data centers are under construction and will require Nvidia GPUs, the number is not as high as grand announcements suggest. On-the-ground research by Sighteline Climate confirmed earlier suspicions from business journalist Ed Zitron that much data center construction is not as advanced as press releases imply. Of the 21.5 GW of announced capacity expected by 2027, only 6.3 GW was actively under construction, and "under construction" can range from a nearly completed facility to one with only a foundation poured. Recent reports, even during the making of this video, indicate delays in major data center expansion plans, such as Oracle and OpenAI's Stargate Data Center in Abilene, Texas.
Precise figures are hard to pin down, because private companies are difficult to verify and public companies fold AI operations into other business segments. The implication, however, is that unless every data center on Earth replaces its graphics cards every 14 months, Nvidia's current production rate would oversupply the existing market. The picture is further complicated because measuring data center size in megawatts and gigawatts typically counts the facility's entire power input, not just the energy delivered to GPUs: there is also networking, cooling, storage, and processing overhead. The International Energy Agency states that a typical data center dedicates only 46% to 65% of its energy to compute. Even with generous high-end estimates for new and upgraded data centers using Nvidia GPUs exclusively, there isn't enough capacity to accommodate 10 GW worth of GPUs. This suggests either an overestimation of production by Jensen Huang or that the chips are ending up in places analysts cannot track.
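To make the mismatch concrete, the minimal sketch below combines the 7.7 GW Goldman Sachs estimate of operational facilities quoted above with the IEA's 46% to 65% compute share; the only thing it adds is the arithmetic.

```python
# How much GPU power can the operational fleet actually absorb?
# Facility capacity (7.7 GW) and the 46-65% compute share are the estimates
# quoted in the text; the calculation itself is the only addition.

operational_facility_gw = 7.7                        # Goldman Sachs estimate cited above
compute_share_low, compute_share_high = 0.46, 0.65   # IEA range for energy going to compute

usable_low = operational_facility_gw * compute_share_low
usable_high = operational_facility_gw * compute_share_high

print(f"Power available for compute: {usable_low:.1f}-{usable_high:.1f} GW")
# Roughly 3.5-5.0 GW of compute capacity worldwide -- well short of a full
# year's worth (~10 GW) of new GPUs, unless installed cards are being
# replaced far faster than their depreciation schedules imply.
```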
The second major problem for Nvidia's expectation of continuous chip deliveries is power. Industry analysis indicates that the primary bottleneck for new data centers is not advanced computer chips but the electrical infrastructure to support them. Power has become the real-world constraint, which is why projects are compared in gigawatts. Components like transformers, power supplies, and generators make up less than 10% of total build cost but are essential. The price of these components, particularly large power transformers, has more than doubled in four years, and supply struggles to meet demand from both data centers and regular energy grids. Many of these components come from China, South Korea, Mexico, and Canada, making tariff disruptions a real problem.
This limited supply of data center components forces companies to buy everything they can as soon as possible, even if it cannot be immediately utilized. If Nvidia releases a new batch of chips, companies are compelled to buy them, even without immediate space, to avoid falling to the back of the line. The same applies to cooling and energy supplies. This phenomenon is known as the bullwhip effect in industrial planning, explaining high spending alongside slow project progress. For instance, two fully fitted-out data centers near Nvidia's headquarters in Santa Clara remain offline, awaiting local utility upgrades. Nvidia currently benefits from this, as they can quickly sell chips from TSMC to eager buyers, even if the chips sit in a warehouse for months. This is a risky strategy that could lead to massive oversupply with a minor demand correction, but for now, it's been lucrative.
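A toy model helps show why the bullwhip effect produces exactly this pattern: a modest, temporary bump in end demand turns into much larger swings in the orders the chip maker sees, because each tier forecasts naively and pads its inventory. The three-tier structure and every number below are invented purely for illustration.

```python
# Minimal bullwhip-effect sketch: each tier orders to cover the latest demand
# plus an inventory buffer based on a naive last-value forecast.

def tier_orders(incoming_demand, safety_weeks=2):
    """Return the orders one tier places upstream, given the demand it sees."""
    inventory = incoming_demand[0] * safety_weeks        # start with a full buffer
    orders = []
    for d in incoming_demand:
        forecast = d                                     # naive forecast: demand stays at its latest level
        target_buffer = forecast * safety_weeks          # desired stock on hand
        order = max(0, d + (target_buffer - inventory))  # cover demand and close the inventory gap
        inventory = inventory + order - d
        orders.append(order)
    return orders

end_demand = [100] * 5 + [110] * 3 + [100] * 7           # a brief 10% bump in real demand
operator_orders = tier_orders(end_demand)                # data-center operator ordering from integrators
integrator_orders = tier_orders(operator_orders)         # system integrator ordering from the chip maker
chipmaker_orders = tier_orders(integrator_orders)        # what the chip maker actually sees

for name, series in [("end demand", end_demand), ("operator", operator_orders),
                     ("integrator", integrator_orders), ("chip maker", chipmaker_orders)]:
    print(f"{name:>11}: peak {max(series)}, trough {min(series)}")
# A 10% bump at the bottom of the chain swings chip-maker orders from more
# than triple the baseline down to zero a few periods later.
```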
However, the assumption that demand will keep absorbing whatever is produced is starting to show strain. Nvidia's annual report showed record sales and profit, but its inventory more than doubled from the previous year and quadrupled from 2024. If demand truly outstripped supply, such a large inventory would make little sense. It suggests either difficulty in selling chips or that Nvidia itself is experiencing upstream supply chain issues and is stockpiling components in the confidence that future sales will materialize. Ironically, an LLM-based market algorithm was the first to flag this inventory anomaly, along with a trend of customers taking longer to pay their receivables.
The capacity problem is compounded by the cost of energy itself. Fully operational data centers consume significant power, and energy prices have risen, partly because of those same data centers. Higher energy prices severely affect the viability and cash burn rates of running these centers. Most data centers rely on local energy grids, leading to higher costs and longer waits for capacity. Newer centers that use natural gas turbine generators face natural gas prices that have doubled, further increasing their largest ongoing expense.
The third major problem for Nvidia is the expected lifespan of these chips. Concerns have previously been raised about capital expenditure on cutting-edge chips that depreciate rapidly. The industry standard among big tech companies is to depreciate GPUs over six years, even though their realistic service life is closer to three years. Spreading the cost that way means counting only one-sixth of the purchase price against income each year, which both stretches out the tax deduction and keeps reported annual profits higher. Stretching depreciation beyond a reasonable service life makes expenses appear lower than they truly are for the companies that make up the majority of Nvidia's demand.
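The arithmetic behind that accounting choice is simple straight-line depreciation; the sketch below uses an assumed $40,000 accelerator purely to show how a six-year schedule halves the annual expense relative to a three-year one.

```python
# Straight-line depreciation sketch. The $40,000 purchase price is an
# illustrative assumption, not a quoted figure.

gpu_cost = 40_000  # assumed cost of one accelerator, in dollars

for useful_life_years in (3, 6):
    annual_expense = gpu_cost / useful_life_years
    print(f"{useful_life_years}-year schedule: ${annual_expense:,.0f} expensed per year")

# 3-year schedule: ~$13,333/yr; 6-year schedule: ~$6,667/yr. Doubling the
# assumed service life halves the annual expense, flattering reported profit
# even though the cash has already gone out the door.
```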
Nvidia defended its own accounting practices, though the criticism was directed at its customers, who are the primary buyers of its GPUs. Nvidia itself is primarily a chip designer, with manufacturing handled by suppliers like TSMC, so its depreciation schedules are less relevant. The point was that if companies like Microsoft, Oracle, and Meta adopted more honest accounting, their reported profits would decrease, potentially reducing investor hype around AI and, consequently, their demand for more GPUs. While these companies have other major businesses, a reduction in GPU orders would significantly impact Nvidia's entire business.
Ongoing supply bottlenecks and higher energy prices make this problem worse. If Nvidia keeps releasing new flagship AI GPUs every year, rendering previous models obsolete, it becomes harder to justify buying today's chips in advance, since they might be irrelevant by the time a data center comes online. Meanwhile, higher energy prices make chips already in service lose economic value even faster. When energy is cheap, it can still be worthwhile to run older, less efficient hardware; as prices rise, margins shrink to the point where the energy cost of running a server exceeds its rental value, effectively turning multi-million-dollar racks of hardware into e-waste.
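A rough breakeven calculation shows how that tipping point arrives; every figure below (server power draw, facility overhead, rental rate) is an illustrative assumption, not a measured value.

```python
# When does an older GPU server become e-waste? A rough breakeven between the
# electricity needed to run it and the revenue it can earn. All numbers here
# are illustrative assumptions.

server_draw_kw = 10.0        # assumed draw of an 8-GPU server at load (kW)
pue = 1.4                    # assumed facility overhead (cooling, conversion losses)
rental_per_gpu_hour = 0.60   # assumed market rate for an older-generation GPU ($/GPU-hr)
gpus_per_server = 8

hourly_revenue = rental_per_gpu_hour * gpus_per_server

def hourly_energy_cost(price_per_kwh):
    """Electricity cost per hour at a given price, including facility overhead."""
    return server_draw_kw * pue * price_per_kwh

breakeven_price = hourly_revenue / (server_draw_kw * pue)
print(f"Hourly rental revenue:       ${hourly_revenue:.2f}")
print(f"Breakeven electricity price: ${breakeven_price:.3f}/kWh")
for p in (0.08, 0.15, 0.35):
    margin = hourly_revenue - hourly_energy_cost(p)
    print(f"  at ${p:.2f}/kWh -> margin ${margin:+.2f}/hr")
# Under these assumptions the rack clears its power bill comfortably at
# $0.08/kWh, but the margin disappears near ~$0.34/kWh -- before counting
# staff, networking, or the capital cost of the hardware itself.
```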
While the market can remain irrational for extended periods, even investor sentiment may be shifting. For the past four years, private credit firms like Blue Owl and BlackRock's credit arm have been major financiers of large data center projects. They are now facing industry-wide problems of their own, which will make it much harder to keep the supply of easy financing flowing.