AI’s Runaway Capex Raises Questions, Signifies Big Changes Ahead

Alphabet (Google), Microsoft, and Meta were among the companies posting quarterly financial results this week. (Amazon will report next week.) The market responded enthusiastically to results from Google and Microsoft, but Meta’s results were met with ambivalence that deepened into concern.

Beyond the respective revenues and earnings of the technology behemoths and the near-term reactions of the market, a notable takeaway was that AI-related capital expenditures are soaring. In an article published yesterday by the Washington Post, we learned that technology’s largest companies are spending billions of dollars on AI-related datacenters, chips, and electricity. Water consumption, though not cited, is also rising.  

From the Washington Post article:

On Wednesday, Meta raised its predictions for how much it would spend this year by up to $10 billion. Google plans to spend around $12 billion or more each quarter this year on capital expenditures, much of which will be for new data centers, Chief Financial Officer Ruth Porat said Thursday. 
Microsoft spent $14 billion in the most recent quarter and expects that to keep increasing “materially,” Chief Financial Officer Amy Hood said.
Overall, the investments in AI represent some of the largest infusions of cash in a specific technology in Silicon Valley history — and they could serve to further entrench the biggest tech firms at the center of the U.S. economy as other companies, governments and individual consumers turn to these companies for AI tools and software.
The huge investment is also pushing up forecasts for how much energy will be needed in the United States in the coming years. In West Virginia, old coal plants that had been scheduled to be shut down will continue running to send energy to the huge and growing data center hub in neighboring Virginia.
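To put the quoted quarterly figures in annual perspective, here is a back-of-the-envelope sketch. It uses only the numbers reported above; the annualization itself (assuming each quarterly run rate holds for four quarters) is my own simplifying assumption, not a figure from either company.

```python
# Back-of-the-envelope annualization of the capex figures quoted above.
# Assumption: each quarterly run rate holds for four quarters (a floor,
# since both companies expect spending to keep rising).

quarterly_capex_billions = {
    "Google": 12.0,     # "around $12 billion or more each quarter"
    "Microsoft": 14.0,  # "spent $14 billion in the most recent quarter"
}

for company, quarterly in quarterly_capex_billions.items():
    annualized = quarterly * 4
    print(f"{company}: ~${quarterly:.0f}B/quarter -> ~${annualized:.0f}B/year or more")
```

Even as a floor, that implies roughly $48 billion a year for Google and $56 billion for Microsoft, which is why these outlays dominate the earnings conversation.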

Drawing Inferences

We can draw significant inferences from these developments. 

First, AI is not a pursuit for the light of wallet and faint of heart. As noted here previously, even most large enterprises are not in a financial position to pursue AI in their own on-premises datacenters. The probability of AI success as a business proposition remains uncertain for many firms; a dubious return on prodigious AI-related Capex investments discourages, and often prohibits, on-premises buildouts. If enterprises want to develop AI applications and services, they’ll likely do so on infrastructure owned by the major cloud providers. The result will be further market consolidation for already powerful cloud giants.

An Agence France-Presse (AFP) article made that exact point, and suggested it was a problem that European and U.S. authorities might attempt to assuage:

The earnings come as Google, Microsoft, Amazon and other rivals competing in the hot field of AI face scrutiny from regulators in the US and Europe.
The US Federal Trade Commission early this year launched a study of AI investments and alliances as part of an effort to make sure regulatory oversight can keep up with developments in artificial intelligence, and stop major players shutting out competitors in a field promising upheaval in multiple sectors.
"Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition," said Lina Khan, head of the Federal Trade Commission, in a statement.
One major concern is that generative AI, which allows for human-level content to be produced by software in just seconds, requires a massive amount of computing power, something that big tech companies are almost uniquely capable of delivering. 

I agree on the diagnosis of the problem, but I don’t see a simple remedy. 

Will the U.S., Europe, and other governments seek to provide a pool of government-funded datacenters and processing infrastructure for startups and enterprise customers? That would be an expensive proposition, and practically unsustainable for any government over the long haul. Still, the federal government of Canada recently moved in that direction, setting aside $2.4 billion in its upcoming budget to build capacity in artificial intelligence; about $2 billion of that sum will go to a fund facilitating access to computing capabilities and technical infrastructure.

Governments have a lot of funding obligations for their existing programs, and Western governments are under renewed pressure to spend more on defense amid escalating geopolitical tensions. It’s one thing to announce a significant one-off initiative dedicated to providing compute infrastructure for AI, but delivering such largesse repeatedly and indefinitely is a far more problematic endeavor. 

It will be interesting to see how, or if, governments attempt to mitigate the power of cloud giants as AI becomes a greater factor in the global economy. Perhaps we ultimately will see the development of regulatory frameworks governing equitable pricing and access (for third-party suppliers of AI solutions and customers) to AI-related infrastructure. There would be pushback from the cloud giants, of course, but I wouldn’t summarily dismiss the possibility of something along those lines ultimately coming to fruition. 

No CFO Welcomes Profligate Spending

There are other considerations, too. 

The capital expenditures associated with AI might favor the thick wallets, deep acumen, and global resources of the cloud giants, but that doesn’t mean they enjoy disbursing billions of dollars on a continual basis to feed the voracious appetite and belching furnace of their AI engines. Exorbitant Capex outlays affect AI service pricing, the rate of mainstream market acceptance, and vendor profit margins. No CFO looks fondly on profligate spending.

I think it’s safe to assume that all the cloud giants have prioritized, if not launched, active initiatives aimed at reducing energy costs, increasing datacenter and infrastructure efficiency, and decreasing the costs associated with development and procurement of AI chips and computing infrastructure. 

While the AI boom has initially benefited the likes of Nvidia, there’s no guarantee that the recent past will be an accurate predictor of future results. Nvidia’s GPUs are limited in supply, expensive to buy, and costly to run, consuming vast amounts of energy and water. Even granting that Nvidia will strive to improve the availability and energy efficiency of its AI chips, cloud operators (as we’ve discussed in this forum previously) will not want to depend on a single external supplier for such an essential infrastructure component. 

The upshot is that AI will encourage and expedite a shift at the major cloud providers toward greater in-house design and development of AI-related infrastructure, from GPUs to CPUs to network infrastructure. Capital costs must be consistently managed and contained, and predictable infrastructure availability will need to be assured. The best way to achieve those results is to control the factors and variables that relate to the infrastructure elements underpinning service delivery.

We have also previously discussed the importance of off-grid energy to AI processing at scale. I cannot envision cloud behemoths wanting to contend with the negative publicity of contributing to a grimy renaissance of coal-fired power plants. The juxtaposition of information-age AI services relying on industrial-era coal would be grotesquely incongruous, not to mention significantly harmful to the environment. No company burning coal to power its datacenters could credibly drape itself in environmental virtue or boast about sustainability. 

Consequently, look for significant investments in a supply chain that includes small modular reactors (SMRs) and an array of off-grid renewable energy sources, including solar, wind, tidal, and geothermal power. Technology giants might also pursue distributed global architectures that place datacenters in advantageously situated energy corridors, even off the beaten track, supported by smaller edge environments nearer to major population centers and featuring intelligent caching capabilities. The global networks that connect datacenters and edge infrastructure are also likely to evolve further to meet new requirements. 
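One way to picture the "intelligent caching" role of those edge environments is a simple cache-aside pattern: the edge node serves hot results locally and only falls back to the distant core datacenter on a miss. This is my own illustrative sketch; the article names no specific design, and the class and capacity below are hypothetical.

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache-aside sketch (illustrative, not any vendor's design):
    an edge node answers repeat requests locally and only makes the expensive
    long-haul call to the core datacenter on a cache miss."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.store: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str, fetch_from_datacenter) -> str:
        if key in self.store:
            self.store.move_to_end(key)      # mark as recently used
            return self.store[key]
        value = fetch_from_datacenter(key)   # costly round trip to the core
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return value
```

The appeal for AI delivery is that repeated requests can be answered near the population center without another round trip to the energy-hungry core datacenter, shaving both latency and load.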

It's important to keep in mind that what we see today is merely a snapshot in time. Some of what we assume to be a constant presence will turn out to be as ephemeral as a one-hit wonder. As the sophistication and cost of developing and delivering new services (such as AI) increases, the companies that deliver those services must similarly innovate and modify their underlying infrastructure to keep pace with the demands of their customers and their own business imperatives. 

Finally, I would not be surprised to see technology behemoths eventually make energy-related acquisitions, excluding fossil fuels, to bring costs under control and assure continuous operation of key services. 
