The Citrini Moment: Not a Fashionable Cocktail, but a Provocative AI Scenario
Between forecasts and fiction
By now, you have likely heard about or even read the provocative Substack post from a firm called Citrini Research. I had no prior knowledge of Citrini Research, and I suspect many others were similarly unfamiliar with the firm until the celebrated post appeared.
Citrini sounds like a cocktail, perhaps a cross between a lemon sour and a martini. The word “citrini” derives from citron, French for lemon.
Anyway, Citrini Research’s boilerplate says that the firm “provides insights on thematic equity investing and global macro trading—with cross-asset, lateral thinking. Our promise: you’ll never have to ask ‘what’s the trade?’”
So, the free post, titled THE 2028 GLOBAL INTELLIGENCE CRISIS (I’m replicating the all-caps formatting used by Citrini), appears designed as a loss-leader to induce readers to subscribe to Citrini’s newsletter, which purports to offer guidance on how to trade your way to prosperity during the intelligence crisis.
What Citrini’s post describes is an elaborate worst-case scenario involving the far-reaching deleterious effects of AI on nearly every industry and profession spanning the economy. The scenario isn’t presented hypothetically, but as a form of metafiction, in which future news headlines from Bloomberg and other media companies are imagined and presented as if they have occurred. Such narrative embellishment is a staple of science fiction (as well as other genres).
The author or authors admit that the post depicts only a speculative scenario, but their narrative is unfurled artfully and persuasively. Faux news headlines and imagined dystopian metrics give the impression of a future foretold. It’s a scenario, sure, but the authors want you to believe that it’s a probability if not an inevitability.
Point and Vigorous Counterpoint
I don’t have an objection to this sort of thing, though a vituperative protest can be found in a blistering rejoinder issued by Edward Zitron (not Citron or Citron Research, which apparently does exist), who pulls no punches in taking Citrini to task for what he characterizes as “slop-filled scare-fiction.”
There’s no question that Citrini is indulging in shock treatment, jolting its readers (and prospective subscribers) into a sense of febrile urgency. The post is one lengthy, fearful call to action. As I said, though, I don’t dismiss it out of hand, and I should explain why I’ve espoused that view.
Together with my colleagues, I have researched, written, and published perhaps hundreds of market forecasts. We invariably began the process with historical data, incontrovertible numbers that were already in the books. Hindsight is 20/20, and we have a reasonably firm grasp on what happened qualitatively and quantitatively if we have monitored, tracked, and recorded events accurately.
From that historical base, we consider the current circumstances, factors, and variables and play them forward, positing assumptions that are combined with a rigorous methodology. Part of that methodology involves assessing various scenarios, of which there are usually at least three: a best-case scenario (the most optimistic), a moderate or goldilocks scenario (the most reasonable option, all things considered), and a worst-case scenario (the most pessimistic option, with the grimmest implications). One rarely opts for the best- or worst-case scenario when preparing a forecast, though macroeconomic crises or profound technological shifts might cause a measured recalibration toward one extreme more than the other.
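The three-scenario approach described above can be sketched as a probability-weighted blend. Here is a minimal illustration in Python; the growth figures and weights are entirely hypothetical, chosen only to show the mechanics:

```python
# Illustrative only: hypothetical scenario outcomes and probability weights
# for blending best-, moderate-, and worst-case forecasts into one estimate.
scenarios = {
    "best":     {"growth_pct": 12.0, "weight": 0.2},
    "moderate": {"growth_pct": 5.0,  "weight": 0.6},
    "worst":    {"growth_pct": -4.0, "weight": 0.2},
}

def blended_forecast(scenarios):
    """Return the probability-weighted expectation across scenarios."""
    total_weight = sum(s["weight"] for s in scenarios.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights should sum to 1"
    return sum(s["growth_pct"] * s["weight"] for s in scenarios.values())

print(blended_forecast(scenarios))  # 0.2*12 + 0.6*5 + 0.2*(-4) = 4.6
```

In practice, as the paragraph above notes, forecasters usually publish the moderate case while letting crises or technological shifts shade the weights toward one extreme.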
What Citrini has published is not a forecast, and, as we’ve established, they readily concede that point. They have presented a stark scenario, dystopian in nearly every respect. I say nearly every respect because the AI behemoths, including major infrastructure suppliers and AI model purveyors, clearly thrive in Citrini’s narrative, at least over a limited or intermediate horizon, which is usually as far as these scenarios can extend.
Eventually, though, as the rest of the economy, including consumer spending, is gutted and vitiated, everybody suffers, including the AI kingpins. AI, it turns out, relies on a robust economic foundation predicated on human prosperity. When the latter is weakened irreparably, the house of virtual cards collapses.
Useful Thought Experiments
Citrini’s scenario provides a useful thought experiment. It provides an impetus for us to think both imaginatively and critically about how the future might unfold. We should always remember, however, that the future, in all its technicolor nuances, is beyond the reach of our predictive powers. At best, we’re right about the direction of events. We throw our prognosticatory haymakers, but we know we’ll be fortunate if they make glancing contact with the future’s head or shoulders. Landing a blow on the nose is too much to expect. This was as true for Nostradamus and Baba Vanga as it is today for Athos Salomé. There are some things we cannot know until they happen, and even then, we might not understand them until we gain further distance and perspective.
But I want to end this post on a positive note, punctuated by a helpful observation.
As I thought about the Citrini piece and Ed Zitron’s furious refutation, I postulated that the future holds out two broad possibilities. In one, AI goes from strength to strength, lifting the stock prices and valuations of AI companies to an extent that surpasses even the most optimistic forecasts of smitten Wall Street analysts. In this imagining, AI eventually attains a degree of intelligent automation (I can’t call it “superintelligence,” for reasons we’ve discussed previously) that leads to pervasive job displacement across the professional landscape, prompting a revolutionary restructuring of industries, economies, polities, and human lifestyles. Consequently, AI wins, at least for a while, until it goes too far and creates an untenable wealth disparity.
Another possibility, necessarily fuzzy, is that an AI bubble bursts, and the technology reveals itself as less than the much-hyped sum of its parts. In this narrative, AI proves useful in a limited number of applications and roles, but it isn’t the intelligently automated silver bullet that neutralizes and eliminates jobs and professions across the breadth and depth of the universal economic landscape. That’s a problem for AI, however, because it has been marketed and sold as nothing less than an epochal technological revolution. If AI falls short of those lofty heights, it will be perceived as a major disappointment. In the public markets, share prices would fall precipitously, many companies would fail outright, including marquee vendors of frontier models (such as OpenAI), and private credit would implode spectacularly. (This latter scenario, the implosion of private credit at scale, might already be happening, so I can’t say that I’ve taxed my imagination beyond the breaking point.)
These two potentialities, by the way, are not mutually exclusive, at least not in the fullness of time. I suspect, but cannot know, that the reality we eventually find will be somewhere between those polarities, borrowing narrative strands from each.
Regardless, we should welcome a reasoned, vigorous debate about where things are heading and what might happen when we get there. If nothing else, it demonstrates that we have yet to relinquish critical thought to the machines.