Warning Signs Flashing Insistently at OpenAI

These risks aren’t shy about announcing themselves

Omniscience is a purely theoretical concept, unattainable in practice for mere mortals who endure finite lives.

Nobody is omniscient, especially those who presume to be omniscient. Rather than striving for omniscience — a waste of time due to its unreachability — we should instead try to minimize our ignorance and stupidity. This defensive posture is the inspiration for strategies that involve risk mitigation.

Forget about trying to get everything right. Instead, try to avoid getting everything wrong. Meanwhile, cultivate the stretch goal of dodging as many mistakes as your wits and risk-mitigation tactics will allow.

Hubris is a failing that infects those of us who forget or misperceive our cognitive limitations. In Greek tragedy, the excessive pride and arrogance of hubris ultimately led to retribution or nemesis.

In our brief human lives, perhaps no ending is good, but some are worse than others. Some endings include a sequence of defeats marked by humiliations and indignities, perhaps even relative or absolute pauperization. Nobody wants to experience the ignominy of a long, hard fall, suffering the cumulative misfortune of having one’s backside thud painfully on every concrete step of the inglorious descent.

All of the foregoing is prologue to a discussion of OpenAI and its constellation of financial and technological enablers, including but not limited to Oracle. Not that long ago, nearly everybody thought that OpenAI, progenitor of ChatGPT, had the world on a string. It was the undisputed AI darling, the nascent technological behemoth that couldn’t miss. Its principals felt their oats, and came to uncritically embrace the hype and obsequious publicity bestowed on their company.

It’s only now, in the clarity of hindsight, that we can see that OpenAI was never as unassailable as its proponents and principals believed it to be. OpenAI is now treading on some treacherous fault lines, which is where risk mitigation might play a useful, if somewhat belated, role.

The company, prodigiously financed to the point of an absurdly inflated valuation, might find itself in a paradoxical position where it must retreat to go forward. To continue charging ahead, without first opting for a tactical retreat, could end in disaster, not only for OpenAI but for its leading investors and principal technology partners, notably the aforementioned Oracle. (More on which later.)

OpenAI is fading fast as it rounds the far turn in its race toward the IPO finish line. A massive IPO, considered a foregone conclusion by many observers just months ago, is suddenly less certain to come to fruition.

Missed Targets and Corporate Intrigue

An article published today in the Wall Street Journal reports that OpenAI recently missed its own targets for new users and revenue, raising concerns among “some company leaders” about the company’s capacity to continue spending extravagantly on supersized datacenters.

From the article:

Chief Financial Officer Sarah Friar has told other company leaders that she is worried the company might not be able to pay for future computing contracts if revenue doesn’t grow fast enough, according to people familiar with the matter. 
Board directors have also more closely examined the company’s data-center deals in recent months and questioned Chief Executive Sam Altman’s efforts to secure even more computing power despite the business slowdown, the people said.

The “people familiar with the matter” clearly wish to remain anonymous, but that doesn’t mean their concerns are unfounded. Within the boardrooms and offices of OpenAI, there are legitimate reasons for concern. Allow me to speculate, however, and suggest that the “people familiar with the matter” might be setting the stage — alas, this episode, like everything in big tech business, is performative — for a potential boardroom revolt against Sam Altman. Nobody would articulate it publicly in those terms, of course, but they can drop strong intimations of disaffection that might psychologically prepare the market for a putsch.

Indeed, in the paragraph that follows, we learn that “Friar and other executives are now seeking to control costs and instill more discipline in the business, at times putting them at odds with their CEO, people familiar with the issue said.”

Responding to the negative gist of the article, Altman and Friar then proceed to tell us, officially, that nothing could be further from the truth. Quote:

“We are totally aligned on buying as much compute as we can and working hard on it together every day,” Altman and Friar said in a joint statement Monday night. Any suggestion that the pair are divided or pulling back on securing new computing resources is “ridiculous,” they said.

OpenAI Doth Protest Too Much?

Well, maybe. Another possibility is that Altman is trying to extinguish a fire that threatens to consume him. He’s been there before, and his survival instincts are battle-tested. Nothing is a done deal until everything is in the books, but it’s clear, in the perhaps jaded view of these well-worn eyes, that intrigue is afoot along OpenAI’s mahogany row.

The most amusing statement in the WSJ article emanates from OpenAI, undoubtedly from the company’s marketing department. We learn that “the business is firing on all cylinders and the mood internally is incredibly positive.” This is the sort of cringingly earnest statement companies issue when the cylinders are misfiring and the mood isn’t remotely positive. The use of the hard-working adjective “incredibly” is the giveaway. You can sense the panic in the room.

As the alleged internal intrigue plays out, court proceedings have begun, stemming from a lawsuit by the redoubtable Elon Musk, who seeks to topple Altman and reverse OpenAI’s conversion into a for-profit company. Is it possible that Musk proxies are among the sources of the Wall Street Journal’s report on OpenAI’s restiveness? Yes, it is, and I wouldn’t dismiss that possibility, but there’s enough evidence in OpenAI’s current business missteps and travails to suggest that insiders are displeased with Altman’s leadership.

As if that isn’t enough drama, Oracle’s stock was falling again today, partly as a result of increased scrutiny on its business entanglements with OpenAI. As a Barron’s article explains, Oracle’s shares are down 15% in 2026 amid concern about its capital expenditures associated with AI datacenters. Any uncertainty about OpenAI ramifies into questions that redound on Oracle, recipient of approximately $300 billion (how blithely we now type such mind-bending numbers) for OpenAI-related datacenter buildouts through 2030. Notes the Barron’s article:

The OpenAI deal accounts for more than half of Oracle’s $553 billion cloud-computing backlog—the total value of future revenue it is expecting, but has yet to collect. It is the cornerstone of the company’s pivot under the leadership of executive chairman Larry Ellison from software toward providing AI infrastructure services.

If Trump and OpenAI both falter, Larry Ellison will find himself and Oracle in a difficult spot. He’s wagered heavily on both, in different ways, to propel Oracle from also-ran cloud status into a perceived AI-cloud play with a bright future.

Perhaps OpenAI will be okay. Maybe it will reassert itself and the doubts will vanish. Still, risk mitigation is a good idea, especially when the risks haven’t been shy about announcing themselves.
