Taking in the Big Picture at OpenAI

It’s nearly impossible to keep up with all the announcements relating to genAI. Something new – funding, partnerships, releases of new or improved LLMs, executive hires, launches of strategic roadmaps – jumps to the front of the queue each day, to be brusquely displaced by something else, often later the same day. 

Anybody who chooses to write commentary on such ephemeral developments is fighting a losing battle. It’s like trying to pin jelly to the wall with a steak knife or trying to catch a floating soap bubble in your hand. In attempting the task, one is struck by the utter futility of the effort and the pointlessness of the exercise. Why expend thought and dedicate valuable time to something so fleeting that it will be eclipsed and forgotten within a few days? It's a thankless task, yes, but it’s worse than that: It brings home the fundamental insignificance of the disparate procession of events and the absurdity of any sincere real-time effort to frame them, as individual shards, in a self-contained context.

Many years ago, I worked with an unusually philosophical software architect who, during a conversation one day over coffee, plaintively expressed the opinion that everything he produced was essentially meaningless and on its way to worthlessness. In his view, none of the code he helped bring to life would persist longer than a product cycle, if that long. He had a point: posterity will not remember any of this stuff. I try not to dwell on these thoughts, but sometimes I do. 

With genAI, the hyperactive clatter of the tech industry has sped up even further. The staccato rhythms of successive announcements are strident and percussive, like machine-gun fire. How can anybody make any real sense of it? In a few weeks, will anybody even remember, with any clarity, what was announced this week or last?

The challenge is to claw some distance, achieve some detachment, from proceedings. If you can get to a semblance of objectivity by removing yourself from the frenzy, you might be able to discern a bigger picture, preferably one that is more than the expressionistic daubs and slashes you see when you’re pressed up against the moving canvas or when you're too close to the shards that constitute a mosaic.

Let’s zoom out, then, and take a look at what’s happening at OpenAI, which in its relatively short lifespan has packed in enough activity to suffice for companies twice its age. In the last year alone, in fact, OpenAI has experienced more drama and intrigue than most companies sustain in a corporate lifetime. 

As the LLM Turns 

A feature article in Fortune recounts some of the melodrama at OpenAI. After reading the article, one is impressed by the worldly shrewdness and cutthroat machinations of the ostensibly cerebral PhDs and academic researchers who constitute some of the company's principal dramatis personae. At least a few of them appear to be as ambitious, avaricious, and ruthless as any Silicon Valley besuited snake. In a way, the ruthless drive and devious treachery humanizes the AI geniuses. They're just like us, the reader rationalizes, but far better at math and science. Baseness is our common ground.

Is the tale of OpenAI Shakespearean? Is it tantamount to classic literary tragedy? 

No, I wouldn’t go that far. But it is a lot like a soap opera. You can imagine the titles: Bots of Our Lives, As the LLM Turns, One Funding Round to Live, The Young and the Relentless, All My Generative Children, or perhaps Guiding Guru. The action-packed story of OpenAI is more like a soap opera because it’s poorly scripted, badly acted, marked by arbitrary plot twists, populated by an ever-changing cast of characters, and trashily engrossing. Onlookers guiltily watch the unseemly proceedings, transfixed by the unedifying sight of a sordid office-politics Olympiad, featuring the main event of climbing a greasy pole to riches.

Despite the big-budget soap opera’s tawdry episodes and bad acting, OpenAI is pulling in obscene investment capital. In the rapidly obsolescent vernacular of the technology world, private startups achieving a $1-billion valuation were called unicorns, principally because of their alleged rarity. That term no longer retains cachet amid the fever dreams of genAI.

What terminology, what neologism, can you apply to OpenAI, though? As of its latest round of funding, announced this week, OpenAI has ostensibly attained a market valuation of (incredulously checks article) $157 billion. OpenAI has left the world of unicorns in the dust and forced us to rethink the quantitative parameters of a stratospheric startup destined for the gauzy AI firmament.

Will we need a new language, understood and spoken only by abyss-pocketed venture capitalists and superfunds, to designate the seeming absurdity of a startup company – which loses about $5 billion annually – commanding a valuation of $157 billion? It would have been the stuff of fiction not all that long ago, but now it’s true.

Just thinking about the valuation of OpenAI boggles the mind, even before one considers the crafty chicanery and sharp-elbowed office politics that seemingly animate the company on a weekly basis. In trying to depict the intraoffice jousting, one doesn’t know where to begin. OpenAI is a cage match of endless tumult, marked by fierce gambits, inelegant retreats, and endless duplicity.

Boardroom Putsch 

We all know about the boardroom putsch that forced out Sam Altman as CEO. It seems so long ago now, but it was last November, which, in the grand sweep of time, is no time at all. A week after Altman’s defenestration, of course, he was back on the throne, and the board members who toppled him suffered their own ousters. The reconstitution of the board to something more to Altman’s liking and likeness was, in retrospect, the first step toward a remaking of OpenAI from a non-profit structure to one that will be all about achieving heretofore elusive profitability.

The ongoing metamorphosis, from the OpenAI that had a non-profit mandate to one that will be all about the money, is primarily responsible for accelerating the speed of the revolving door that serves as the entry and exit point for executive and employee arrivals and departures. In all likelihood, Sam Altman saw the commercial promise of the company from its inception, but he had to play a long game, waiting for the right time to trigger a radical transformation. I’m sure Elon Musk, who was an early investor in OpenAI before parting ways with Altman and his colleagues, also saw OpenAI’s considerable commercial potential, but even he likely had only a vague idea that the company could become so richly valued.

With visions of money and industry influence dancing alluringly in his mind’s eye, Altman had to figure out how to bend the organization to his will. That would require a new cast of characters, and some swaggering investors – strategic and otherwise – who could open doors and expedite a well-heeled journey. 

Fortunately for Altman and his commercial backers – though perhaps not as fortunately as it might appear, as we will discuss later – the unrelenting demands of the market helped to drive out OpenAI staffers who weren’t as incentivized by the unconstrained profit motive. Enormous pressure accompanies the acceptance of vast sums of money and the attendant pursuit of profit at scale. Sacrifices are demanded. Some people won’t make those sacrifices, refuse to countenance the external pressure, and will not tolerate the storm before the calm. 

The Fortune article on OpenAI lists many of the high-profile departures, including Mira Murati, CTO; Bob McGrew, chief research officer; and Barret Zoph, vice president of research. Others have left OpenAI, too. In fact, the exits at OpenAI now spin like high-speed turnstiles, as the following excerpt from the Fortune article explains:

In addition to the senior executives who announced their departures last week, Ilya Sutskever, an OpenAI cofounder and its former chief scientist, left the company in May.

Sutskever had been on OpenAI’s board and had voted to fire Altman. Following Altman’s rehiring, Sutskever—who had been leading the company’s efforts to research ways to control future powerful AI systems that might be smarter than all humans combined—never returned to work at the company. He has since founded his own AI startup, Safe Superintelligence.

After Sutskever’s departure, Jan Leike, another senior AI researcher who had coheaded the so-called “superalignment” team with Sutskever, also announced he was leaving to join OpenAI rival Anthropic. In a post on X, Leike took a parting shot at OpenAI for, in his view, increasingly prioritizing “shiny products” over AI safety.

John Schulman, another OpenAI cofounder, also left in August to join Anthropic, saying he wanted to focus on AI safety research and “hands on technical work.” In the past six months, nearly half of OpenAI’s safety researchers have also resigned, raising questions about the company’s commitment to safety.

Meanwhile, OpenAI has been hiring at a frenetic pace. During Altman’s brief ouster in November, OpenAI staff who wanted to signal their support for his return took to X to post the phrase “OpenAI is nothing without its people.” But increasingly, OpenAI’s people aren’t the same ones who were at the company at the time.

OpenAI has more than doubled in size since “the Blip,” going from fewer than 800 employees to close to 1,800. Many of those who have been hired have come from big technology companies or conventional fast-growing startups as opposed to the niche fields of AI research from which OpenAI traditionally drew many of its employees.

The company now has many more employees from commercial fields such as product management, sales, risk, and developer relations. These people may not be as motivated by the quest to develop safe AGI as they are by the chance to create products that people use today and by the chance for a giant payday.

The influx of new hires has changed the atmosphere at OpenAI, one source said. There are fewer “conversations about research, more conversations about product or deployment into society.”

At least some former employees seem distressed at Altman’s plans to revamp its corporate structure so that the company’s for-profit arm is no longer controlled by OpenAI’s nonprofit foundation. The changes would also remove the existing limits, or cap, on how much OpenAI’s investors can earn.

One of Life’s Paradoxes 

The executive and staff departures noted by Fortune and other business publications are likely just a partial account of the goings of longtime OpenAI employees and the comings of new staffers, the latter more attuned to and incentivized by the pursuit of the payoff. For every marquee name that has left OpenAI, one might reasonably assume that at least two or three other significant contributors have also taken their leave. 

From the perspective of Altman and the company's growing array of investors, the transformation of OpenAI from a non-profit entity to a profit-seeking corporation is absolutely necessary. Still, it’s fair to wonder whether the new hires, despite their skills at commercializing technology and software, possess the same degree of expertise and knowledge as OpenAI’s departing cast of researchers and other specialists. OpenAI might discover belatedly that the employees and product teams that prioritized safety and due diligence were also the organization’s best genAI researchers and developers.

Life is full of paradoxes, some more disconcerting than others, and it might well be that OpenAI’s headlong rush to metamorphose into a for-profit entity resolves into an organization that is even less capable of achieving profitability. You would need ironic detachment, as opposed to a large investment in OpenAI, to appreciate such a scenario.

At any rate, profitability is not a foregone conclusion for OpenAI. The company is reportedly hemorrhaging about $5 billion a year (the valuation of five outdated unicorns), and some of its cost issues have less to do with OpenAI as a company and more to do with the economics generic to the genAI space in which the company competes.

As we’ve discussed before, genAI is a scale game, requiring prodigious investments in datacenters, infrastructure, data and models, expertise and knowledge, and seemingly copious amounts of energy. Competing in genAI is not the same as mastering the relatively straightforward financialization and abstraction that displaced cab drivers, hotels, and food delivery. It’s a different game entirely, and the price for a seat at the table is prohibitive. The demand for capex outlays isn’t a one-time thing, either; the need is seemingly endless. Only the strong and the big, hauling endless bags of money to and fro, will survive; of course, it helps if you’ve already got a wildly profitable business to sustain your eye-watering investments in genAI. 

On the other side of the ledger, involving the prospect of realizing surging revenue growth from paying customers, the picture is not much brighter. OpenAI is bringing in revenue, but the cost of producing that revenue is heavy. To complicate matters further, large enterprise customers that would yield more attractive margins remain skeptical of genAI’s ROI potential and broad-based utility. Such customers are proceeding warily, still figuring out where the ROI and value justify the considerable outlays.

More Episodes Forthcoming  

Some observers, including Mark Cuban, contend that the costs associated with genAI services will go down. Indeed, it’s reasonable to assume that many entrepreneurs and innovators have readily diagnosed genAI’s current slate of problems, including exorbitantly expensive GPUs/accelerators, extravagant energy consumption, and ungainly and solipsistic LLMs. Work is being done in each of those areas to make life less onerous and less costly for genAI purveyors.

Still, some of that work will take more time than many optimists assume. Further, it’s not clear that OpenAI, which was built on a foundation of assumptions aligned with the earlier capex-heavy genAI market, will be advantageously positioned to take advantage of the innovations when they do arrive. For OpenAI’s investors, these questions and concerns should be evident, but they’ve chosen to take their chances. Some investors have other irons in the proverbial fire, anyway.

With this latest funding round, OpenAI might have put itself beyond the realm of exit via acquisition. It will have to find its way to exit through other means, perhaps IPO at some point. To get there, however, it will need to articulate, if not traverse, a viable path to profitability. Even after this whopper of an investment round, OpenAI’s future prosperity is far from assured.

Meanwhile, objective observers are excused for wondering whether the personnel who are leaving OpenAI might be precisely the ones that any genAI startup would need to have on the payroll to succeed in a space where esoteric, and thus rare, expertise can confer sustainable competitive differentiation.

More episodes of the OpenAI soap opera are undoubtedly in the works. But has the show already jumped the shark?
