A Case For a Bursting AI Bubble
Let’s be honest. We all have our enthusiasms, and some of them, by their very nature, are partly or wholly irrational.
Nobody can be objectively rational and logically detached all the time. Some of us can’t manage it even some of the time. Humans aren’t wired that way. We’re not robots, even if robots, gifted with AI brain circuitry, are reputedly destined to serve as our future colleagues or bosses.
Emotions drive us toward action, and emotions aren’t always wrong. Sometimes your instincts are uncannily prescient; they are capable of warning you of something that you know intrinsically to be a threat, even if you haven’t had time to analyze the situation thoroughly and reach a reasoned conclusion. At other times, however — when, for instance, we’re overstimulated or hyperemotional — our sentiments lead us astray and into ambushes that we might otherwise avoid.
Balance, as in so many other facets of life, is critical. But balance between cool thought and mercurial emotion is hard to achieve and even harder to maintain, especially in the present context.
In our current era of shouty social media and dumbed-down sloganeering, many of us are subject to the questionable mercy of our emotions and feelings. We’re living on a perpetual fault line, waiting anxiously to be enraged, offended, or provoked. Similarly, we’re also desperate for positive affirmation, for a warm digital embrace, for something to believe in, if only until the next thing comes along. We’re not that picky about our beliefs. We just need the next dopamine boost to get us from station to station on our hyperactive journey.
We go from hot to cold, and from cold to hot, with barely a pause between the polarities. You might object that the median between hot and cold is characterized as tepid or lukewarm, not a particularly fond destination. True enough, but it’s also not healthy to be too hot or too cold, or to ricochet relentlessly from cold to hot. Chronic bipolarity is not a healthy state.
Digital Town Squares and Potemkin Villages
In what passes unconvincingly for digital town squares — our burning Potemkin villages of provocation and outrage, populated too often by bots, propagandists, shitposters, and trolls — debates rage, some like controllable burns and others more like rampant brushfires. I have neither the time nor the inclination to enumerate the rotating array of verbal firefights, but I will focus on one that we have covered here on past occasions: the debate as to whether AI market valuations have reached bubble proportions.
In a MarketWatch opinion piece published yesterday (February 12, 2025), Jeffrey Funk and Gary Smith contend that we are trapped inside an AI bubble that is destined to burst. Before we look into the reasoning behind their stark prediction, we might wish to briefly summarize the backgrounds and accomplishments of the article’s authors.
Aside from being a retired professor, Jeffrey Funk is an analyst and consultant. He’s also an author; his latest book is titled Unicorns, Hype, and Bubbles: A guide to spotting, avoiding, and exploiting investment bubbles in tech. You might think that anyone who writes a book with that sort of title would have the CV of a confirmed luddite, but that’s where you would be wrong. Funk was an early smartphone proponent in the 90s, predicting their societal and market triumphs. Funk also counseled mobile service providers to focus on apps long before the iPhone was released in 2007.
Funk has previously suggested that the best way to predict the future market success of technologies is to look for and analyze killer applications. If you can’t identify those killer apps, and he argues it’s hard to do so for AI, then the technology will neither reach nor sustain elevated status.
As for Gary Smith, he is a professor of economics at Pomona College and the author of more than 100 academic papers and 17 books, most recently a tome titled The Power of Modern Value Investing: Beyond Indexing, Algos, and Alpha, co-authored with Margaret Smith. His prior books include The AI Delusion, published in 2018, which argues that the primary threat is not that computers are smarter than us, but that we will think computers are smarter than us and therefore trust technologies such as AI to make important decisions on our behalf.
These two authors are not doomsayers or Cassandras — though, as I’ve mentioned before, we should remember that Cassandra’s dark prophecies, though disbelieved, were accurate — but they are skeptical about AI’s potential to reach the grandiose levels of attainment that its advocates confidently present as foregone conclusions.
Buzzkill at the Party
Nevertheless, at the outset of their MarketWatch article, the authors implicitly concede that they are taking a contrarian position that is unlikely to be greeted with hosannas. Nobody likes buzzkill at a party, and the hype surrounding AI is akin to a wild party fueled by coke and grappa. What’s more, the AI bacchanal has raged far longer and more profligately than even an aggregation of the wildest Super Bowl parties. Any attempt to march into this boisterous kegger and counsel perspective and prudence is certain to meet with indifference, if not outright hostility.
Funk and Smith begin their article by reflecting on presentations that were given at a conference in March 2000. Gary Smith was one of four speakers at an academic conference on the booming U.S. stock market and the widely publicized prediction that the Dow Jones Industrial Average (DJIA) — then sitting below 12,000 — would soar to 36,000. The intoxicating aroma of money was in the air, as this excerpt from the article explains:
The first professor at the conference talked about Moore’s Law. The next professor talked about how smart the dot-com whiz kids were. The third professor talked about Alan Greenspan being a wonderful U.S. Federal Reserve chair. Each time, the audience applauded enthusiastically.
Smith was the final speaker and he was the grumpy outsider at this euphoric event. He politely agreed with everything the three professors said, but noted that none of them had said anything about whether stock prices were too high, too low or just about right. Smith analyzed stock prices from a variety of perspectives — including dividends, profits, revenue and economic value added — and concluded that not only was it farfetched to think that the Dow would reach 36,000 anytime soon, but that the current level of stock prices in March 2000 was much too high. Smith told the audience: “This is a bubble, and it will end badly.” There was a conspicuous absence of applause.
Well, what did Smith expect? He defiantly refused to appease an emotionally and financially invested audience by telling it what it most wanted to hear. Members of that audience likely viewed him not so much as a bearer of bad news — in their euphoria, they could not conceive of the possibility of a market nosedive — but as a misinformed crank.
It’s also true that nobody likes to hear, “I told you so.” But Smith might have been justified to utter those words mere days later. From the article:
The conference was on Saturday, March 11, 2000. The Nasdaq Composite finished lower on the following Monday. It fell by 75% over the next three years from its March 10, 2000 peak. The takeaway from this? Many intelligent people avoid looking at what really matters for stock prices.
Back in early 2000, most dot-com companies had no profits, so many dot-com enthusiasts looked at their spending — the more the better, as if expenses were revenue. Some looked at the number of people who visited a company’s web page, or the number who stayed for at least three minutes. Even more fanciful was “hits,” the number of files a web page loaded from a server. Incredibly, some people thought these were good reasons to buy the dot-com stocks.
Investors didn’t think very much about actual and projected profits. Dot-com stocks were “story stocks,” whose lofty values were due to the magical novelty of the internet and dreams about a “New Economy.” One study found that, on average, companies that simply added “dot-com,” “dot-net” or “internet” to their names nearly doubled the price of their stock.
Clear View from Mount Retrospect
Crazy, right? That was madness, we say today, from our lofty vantage point on Mount Retrospect. The funny thing is, we don’t recognize that some of the same signs of impending doom that were readily apparent back then are now flashing insistently on our navigational consoles as we negotiate the AI hypescape.
AI’s current exponents, as Funk and Smith concede, are compelling storytellers. They’ve been joined by the tech industry’s professional (is mercenary too strong?) raconteurs, content to jump aboard a raucous bandwagon. Everybody loves a great story, a propulsive narrative that quickens our pulses, stirs our passions, and fires our imaginations. Personally, I love imaginative narratives and engrossing stories, but they’re best presented in novels, not in market analyses.
Plenty of money has poured into AI-related infrastructure, with Nvidia as a particularly grateful recipient, but that spending occurs in advance, in expectation of the assumed commercial success of AI services. In turn, sustained commercial success depends on killer apps that have yet to be identified, and might not materialize. The authors of the MarketWatch article stress that no incontrovertible killer app has been identified thus far for genAI, nor even for the more grandiose AGI. Other killer-app candidates, still bubbling in the miasmas of countless imaginariums, might eventually emerge, but there’s no guarantee that they will find sufficient traction to justify the already distended valuations of private AI companies, some of which are already so large and ludicrously valued that they bear only scant similarity to what we once called startups.
There’s no question that AI narratives engage our imaginations. We can imagine scenarios where AI delivers human betterment, and we can just as easily imagine dystopian scenarios in which AI severely degrades or impoverishes human existence. Trust me, there are people with stacks of money who would buy either side of that proposition: AI conferring societal value or AI facilitating repressive social control. The resolutely amoral money skews toward the latter option. Regardless, imagining an outcome is not the same as bringing it to fruition.
Why AI Narratives Strike a Chord
Perhaps the AI narratives strike such a resonant chord because we’ve all read novels about AI-inflected futures, including AI dystopias. Even non-genre literary authors such as Ian McEwan (see Machines Like Me) have written such novels, and quite a few of them, including McEwan’s, make for excellent reading.
Still, that’s not an investment thesis, unless you’re investing in screenplays and adaptations.
The article concludes with the following paragraphs:
We have discussed the absence of profits for the so-called unicorn startups and we recently showed that AI companies have far less revenue than did dot-com companies. We estimate that the 2024 revenues for AI companies are in the range of $10 billion to $30 billion. In 2000, internet subscribers paid about $850 billion in 2024 dollars, e-commerce generated about $500 billion in 2024 dollars, and 134 million PCs were sold for about $1.2 billion in 2024 dollars. The revenue of AI companies is 50 times smaller.
We are not the only ones to notice the dearth of AI revenue and profits. Sequoia Capital’s David Cahn, Goldman Sachs’ Jim Covello, and Citadel’s Ken Griffin have all argued that AI’s meager revenue is evidence of a bubble. Indeed, this is a bubble — and it will end badly.
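It’s worth sanity-checking the quoted figures. The sketch below simply divides the article’s dot-com-era revenue totals by its estimated AI revenue range; note that the roughly “50 times” claim holds only toward the top of that range, and the $1.2 billion PC figure looks implausibly small for 134 million units, so the true multiple may differ from what the quoted numbers yield.

```python
# Back-of-the-envelope check of the "50 times smaller" claim,
# using the dollar figures quoted in the article (all in 2024 dollars).
dot_com_revenue_b = 850 + 500 + 1.2  # subscribers + e-commerce + PCs, in $B
ai_revenue_range_b = (10, 30)        # article's estimated 2024 AI revenue, in $B

for ai in ai_revenue_range_b:
    multiple = dot_com_revenue_b / ai
    print(f"At ${ai}B of AI revenue, dot-com-era revenue was {multiple:.0f}x larger")
```

Taking the figures at face value, the multiple ranges from about 45x to 135x, so “50 times” sits at the generous end of the authors’ own revenue estimate.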
Many of the dot-com darlings crashed when the bubble burst back in 2000, but not all of them went down in a blaze of ignominy. Some built businesses, or expanded existing franchises in areas such as computing and mobile devices; they had real customers and generated real revenue and earnings. So far, the numbers suggest that AI will have some utility, broad but not particularly deep, more a nice-to-have (sometimes) than a must-have technology. That means AI will fall short of the extravagant commercial expectations that the frothy market has set for it.
We know from history that Smith was right once, and he might be right again. Not all AI will plunge into an abyss — I don’t think anybody is suggesting that AI does not have some uses — but on its current merits, AI doesn’t warrant its bloated valuations.
Funk and Smith both seem willing to speak at academic and industry conferences, though they probably shouldn’t expect to receive invitations to regale audiences at events sponsored by major AI vendors. Perhaps, if they time it just right and find a receptive academic venue, Funk and Smith might be able to deliver a stark warning before the next bubble bursts.