Observations on OpenAI, Fractured Identities, and Misplaced Idealism
It’s Not a Book Review, but Books Provided the Inspiration
What follows is not a book review. I suppose it qualifies as a free-range commentary inspired by two books I read recently. Both books take OpenAI as the subject, with Sam Altman shifting in and out of focus as chief protagonist and occasional antagonist.
I read the books in relatively quick succession, first Karen Hao’s Empire of AI, followed by Keach Hagey’s The Optimist. For the record, the complete title of the latter is The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. For reasons I will explore shortly, I can see why Hagey would need to expand her focus beyond Altman. It’s the right thing to do, anyway, since no single person, in any company, is wholly responsible for its success, failure, or middling indifference.
The two books cover some of the same terrain, but they differ in perspective and range. I heartily recommend reading both, presuming, of course, that you have an interest in the subject.
Hao takes a broader view, assessing not only the business and personal narratives associated with Sam Altman and OpenAI, but the global significance of AI. She devotes sections of the book to the exploitation of workers in what is now called the Global South (an inexact term that I invoke with ambivalence) and to the environmental consequences of building massive, energy-sucking datacenters to accommodate the technology. The section that deals with the exploitation of workers involves the grimier aspects of reinforcement learning from human feedback (RLHF). When you’re next using ChatGPT, you might want to give some thought to the folks who were psychologically scarred and economically abused as contributors to the quality-control process.
While I found Hao’s narrative compelling, she uses a wide aperture, so much so that I thought Empire of AI could easily have been cleaved into two books rather than one. That’s no more than a cavil, however. Hao crafts her story deftly, starting with high drama and then moving back and forth in time to explain how the situation unfolded and what came afterward.
Hagey, who works for the Wall Street Journal, is a little more conventional in her approach, though equally substantive. Although Altman is framed as the optimist of the book’s title, Hagey’s narrative also encompasses OpenAI, including the company’s other principal players. The book actually includes a cavalcade of colorful characters: venture capitalists, various luminaries in the AI firmament, family members (mostly from Altman’s family), consultants, think-tank types, academics, and members of the Effective Altruism community (often academics themselves). I’ll return to the Effective Altruists later.
The Flipping Blip
As I noted, there are some unavoidable overlaps in the narratives of these two books. Both deal prominently with what OpenAI employees call “the blip,” the brief episode that saw Altman ousted as the company’s CEO before being reinstated days later.

From the two books, you get slightly different perspectives on what happened during “the blip” and why, but it’s essentially the same story. There will always be shades of Kurosawa’s Rashomon in any story told from the vantage points of different participants and witnesses. The tale of Altman’s short-lived expulsion from his perch atop OpenAI is refracted through the eyes and ears of those who subjectively bore witness to the saga. One thing that comes through in both accounts is that Altman was largely hoist by his own petard, though the board and others, including former CTO Mira Murati, did not exactly cover themselves in glory.
Perhaps many of the principals were in over their heads. They were young people, without much business experience, many vaulted from research communities into the high-stakes bloodsport of Silicon Valley, where appearances (and people) can be deceiving.
OpenAI’s evolution from self-proclaimed non-profit conscience of artificial intelligence to profit-driven market leader left many of its employees disoriented. In hindsight, the dizzying trajectory was predictable well before the inevitable bust-up occurred. Many of the company’s founders and principals ultimately scattered in different directions — some to competitor Anthropic — like a defunct rock band riven by artistic differences and simmering resentments. Altman evolved, too, perhaps in ways he wouldn’t have predicted, though he remains, even after one has read both books, an inscrutable character.
Indeed, who is Sam Altman? I’ve read two books that feature the man as a prominent presence, and I still don’t think I have insight into his fundamental character. I’m not even sure he has a fundamental character. Altman is a bit like an entrepreneurial answer to the cinematic chameleon Zelig. He’s endlessly protean and malleable, shape-shifting and adapting to circumstances, environments, and situations. You can see why he’s succeeded in the inherently complex, increasingly fraught technology industry.
In both books, Altman frequently tells people what they want to hear rather than what they need to hear. He is skilled at dissimulation, as hard to read as a professional poker player. He is good at interacting with investors, which is partly why he has scaled the heights of the organizations he has joined. Altman is equally adept at getting others to do his bidding, not unlike Elon Musk (who has a supporting role in both books). Despite all of that, the man who indistinctly emerges from the pages of these books remains an enigma.
Fractured Identities
Each of us has different personae. We have our professional self, and we have our personal self. We compartmentalize these personae in our lives. Altman, however, seems to have more than two personae. He harbors multitudes, which is why some people eventually come to distrust him.
Altman’s self seems like a fractured mirror, its shards refusing to fit together into a seamless image. Perhaps he’s not alone in that regard. Maybe now, more than at any other time in history, our individual identities are not only compartmentalized but highly fragmented.
Has the information age, as a result of the advances of the industry in which we have spent our careers, fractured our personalities to the point of dissociation? We now have our work persona, our home persona, perhaps a slightly different persona we expose to longtime friends, and — on top of everything else — we have at least one or more online personae.
Today you can have multiple pseudonymous personae, which you can wear like new clothes as you traverse various online communities, including social media, gaming sites, and virtual worlds. I wonder what that does to our psyches, whether it makes us more performative and perhaps less stable than we might have been if we’d lived in earlier periods of history. We are awash in selfies and digital influencers, in a relentless celebrity culture where nearly everybody feels the compulsion to do a star turn.
Even tech executives are celebrities with legions of online followers. Through a certain lens, Elon Musk is a Kardashian without the cosmetic surgery, though he seems to have had a hair transplant that Wayne Rooney would envy.
Delusions of Academe
I want to say something about Effective Altruism, particularly about a subset of the philosophy called Earning to Give. Many of the characters in the story of OpenAI adhered to the tenets of Effective Altruism, though some disavowed the creed after the unseemly scandal involving Sam Bankman-Fried, at one time a prominent EA enthusiast.
EA initially arose from academia, particularly from the study of philosophy. It had noble objectives, but, as we find in too many instances, the road to hell is paved with good intentions.
Let’s discuss the Earning to Give tenet, which has a congenital fault that perhaps only academics could miss. The Wikipedia definition of the term is serviceable enough:
“Earning to give involves deliberately pursuing a high-earning career for the purpose of donating a significant portion of earned income, typically because of a desire to do effective altruism. Advocates of earning to give contend that maximizing the amount one can donate to charity is an important consideration for individuals when deciding what career to pursue.”
It’s that last bit, about how adherents should assess their career opportunities, that I wish to scrutinize. In theory, somebody with altruistic motives might choose the most lucrative career path, with a view to giving later, after they’ve accrued copious riches. They’d take this career path rather than doing something that appeals to them or that they genuinely enjoy. The catch is, once you enter the village, you’re at risk of becoming one of the villagers.
The Earning to Give principle could only have been advanced by academics, a class of people who, whatever their other merits, are not exposed to the rough-and-tumble environs of for-profit capitalism. Let’s enumerate a couple of ways in which Earning to Give can be gamed or inadvertently subverted.
It can be gamed by people who want to cloak themselves in selflessness and altruism, as a public-relations exercise to give the false impression that they are not in any way greedy and narcissistic. Devious people would love this sort of cover, which allows them to claim the high ground while making money in low places. (A variation of the final clause of that last sentence — Claiming the High Ground While Making Money in Low Places — sounds like the title of a country-western song.)
Another Silicon Valley Production
Even among the true EA believers, who start out with noble objectives, it can all go wrong. You can adopt Earning to Give with earnest intent, a serious commitment to see it through to philanthropic fruition, yet find yourself at an unholy terminus.
When you take up a profession, you become immersed in its culture, its milieu. You can’t avoid it. In time, you soak up and assimilate the attitudes, beliefs, prejudices, and self-justifications of other people in your professional sphere. It’s like Stockholm syndrome, but without the armed robbery and hostage-taking.
What you do professionally seeps into your identity. It happens unconsciously, imperceptibly, but it happens. As the years pass, and we know they seem to pass at greater speed as we age, you don’t even notice that you’ve become something other than what you thought you were. In the purple patch of your career, you begin to look back on your youthful idealism as impractical, perhaps even undesirable. People are surprisingly resourceful in their capacity to rationalize even the narrowest self-interest.
Look, I don’t have an ology or an ism to sell. I don’t really trust them, and I don’t take myself all that seriously. These ologies and isms usually ask more of people than they’re capable of delivering. I don’t know what to tell you, but I do know that any improvement will come from an awareness of our strengths and our limitations.
EA proponents, with their delusional Earning to Give precept, completely overlook the coercive force of social conformity, to which we are all subject to one degree or another. When you join a culture, you become, whether you like it or not — and whether you know it or not — part of that culture. It happens imperceptibly and insidiously, but that doesn’t make the result any less real.
In key respects, what happened at OpenAI also confounded the aspirations and expectations of those who joined the organization to make AI safer for humankind. They ultimately found themselves in another Silicon Valley production, driven by fear, greed, and the relentless profit motive.