Goldman Sachs' Intriguingly Ambiguous Assessment of the GenAI Market

“It is my ambition to say in ten sentences what everyone else says in a book – what everyone else does not say in a book.” – Friedrich Nietzsche

Nietzsche said some batshit-crazy stuff, particularly after he was stricken by syphilis – or was it brain cancer? Regardless of what afflicted Nietzsche, the quote above, which I first encountered in my youth, is one of his better aphorisms, articulating a noble ambition that might be shared by anybody pursuing concise commentary. 

Brevity is the soul of wit, as Shakespeare said. Today, though, I must admit to witlessness because, dear reader, I’ve gone on a rambling bender, composing a post that tips the scales at more than 4,500 words. Somewhere, Nietzsche’s ghost spins in disgust like a Catherine Wheel.

Still, I wouldn’t have committed this grievous crime against concision without good reason, or at least good intent. I beg your indulgence on this one occasion. 

Today, we’ll deal not only with an astonishing forecast and commentary on the genAI-related market, belatedly subject to searing blowback from the jeering hoi polloi in the bleachers, but also with the strangely convoluted psychology that is driving the current market mania. Some would call the current AI frenzy a bubble, some are more indulgent of the frantic enthusiasm, but I think most would agree that the current tenor is febrile, perhaps a little unstable. 

Goldman Sachs Takes Inventory on GenAI 

While reading a recent commentary from Edward Zitron, I learned of a 31-page report from Goldman Sachs, “Gen AI: Too Much Spend, Too Little Benefit?” Before I provide my analysis of the Goldman Sachs document, which actually comprises several interconnected (if occasionally divergent) perspectives, I recommend that you read Edward’s post, which constitutes a powerful antidote to the vigorous hype and propaganda that have driven genAI to the top of the news cycle and kept it there for what seems like a tech eternity.

As I’ve said before, we always need skeptics, and perhaps we need them more today than we have needed them for quite some time. We live in a paradoxical time: technological advances appear to be accelerating at unprecedented speeds – think of the breadth and depth of technological change you’ve witnessed in your lifetime – but humanity’s ability to adapt intellectually and emotionally to that change is halting and convulsive. How is it possible for a civilization to produce a sustained blitzkrieg of technological innovation while its purported leaders – whether in business, finance, or politics, or even among the technology glitterati – show themselves to be depressingly regressive and obtuse? It’s a strange dichotomy, one that seems to recur throughout history at times of rapid technological advance.

Alas, I digress, but I’m not going to apologize. I’ve been reading and rereading a considerable number of history tomes recently on the disjunctions between technological advances and the human capacity to readily adjust to scientific and technological lurches, shudders, and surges. These narratives of disconnect generally climax in calamity. We (I’m using the royal “we”) eventually regain a semblance of sanity, but not until considerable damage is done. My hope is that we can identify and recognize this troubling, recurrent rhyming pattern of history. If we do, perhaps we will have an opportunity to preclude yet another disaster of misunderstanding.

Anyway, enough. I’ll narrow my focus now to the Goldman Sachs report, which is, as Ed says, a fascinating document. I don’t want to jump to a spoiler, but I see the document, at least partly, as a disclaimer, a cautionary signpost that Goldman Sachs can cite to posterity to say that it judiciously admonished clients and investors to tread warily on the yellow-brick road of genAI.

There’s no question that genAI, even as it benefits from a seemingly endless wave of full-throated promotion, is also facing a long-overdue backlash from critics and from many who are simply fatigued from exposure to the relentless hype. This is a natural course of events, whereby point is followed by counterpoint, effect by counter-effect, action by reaction, and thesis by antithesis, which sometimes, in a dialectical process, results in an enlightened synthesis. 

But we’re not anywhere close to the sunlit uplands of genAI. A battle has been joined, often passionately fought, and it might not resolve itself in clarity and consensus for quite some time.

In the meantime, what you and I can do is read and research the available information and progress toward our own understanding, constantly open to review and subject to change as new information and knowledge comes to light. As a modus operandi, it lacks excitement, but it usually yields insight.

GenAI Market as Chiaroscuro Mosaic 

The Goldman Sachs report is an interplay of substantive and occasionally conflicting content. The first section features an overview of the contents within, akin to an executive summary. Then we’re on to what is arguably the main event, a fulcrum for subsequent discussion and debate, which takes the form of an interview with Daron Acemoglu, Institute Professor at MIT, whom Goldman Sachs bills euphemistically as a genAI skeptic. 

Acemoglu is knowledgeable, dear reader, but he is more than skeptical. Whereas the economists at Goldman Sachs project approximately a 9% increase in productivity from genAI in the next decade and a 6.1% increase in GDP, Acemoglu, spanning the same period, forecasts a less than 1% increase in productivity and only about 1% GDP growth. There’s a chasm between those two forecasts, with the former ascribing great importance and economic heft to genAI and the latter attributing a relative paucity of significance to the technology.

In the interview, Acemoglu makes his case for restraint. Early in the exchange, he says the following:

Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc. as well as create new products and platforms. But given the focus and architecture of generative AI technology today, these truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years.
Over this horizon, AI technology will instead primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive. So, estimating the gains in productivity and growth from AI technology on a shorter horizon depends wholly on the number of production processes that the technology will impact and the degree to which this technology increases productivity or reduces costs over this timeframe.
My prior guess, even before looking at the data, was that the number of tasks that AI will impact in the short run would not be massive. Many tasks that humans currently perform, for example in the areas of transportation, manufacturing, mining, etc., are multifaceted and require real-world interaction, which AI won’t be able to materially improve anytime soon. So, the largest impacts of the technology in the coming years will most likely revolve around pure mental tasks, which are non-trivial in number and size but not huge, either.

 These observations, I think, are accurate. Not only does Acemoglu offer an opinion, but he bases it on a clear set of assumptions, a sound methodology, and a carefully vetted quantitative model. The result is a set of probable outcomes that he credibly defends.  

While I agree with his fundamental conclusions about genAI, I would articulate them slightly differently. Zitron, in his take on the Goldman Sachs report, says, “Generative AI at best processes information when it trains on data, but at no point does it ‘learn’ or ‘understand,’ because everything it's doing is based on ingesting training data and developing answers based on a mathematical sense of probability rather than any appreciation or comprehension of the material itself.”

That view approximates my understanding of genAI’s basic limitations. GenAI has no consciousness, it has no lived experience, it has no capacity to apprehend and assimilate new information in real time or to correctly interpret the connotations and signals of interpersonal exchanges so as to reach accurate, meaningful, or tactful conclusions. Those shortcomings mean that genAI, by definition, is unsuitable for any role that involves significant personal or professional interaction with a human being. Give that some thought. What I’ve just demarcated as out of scope for genAI is a big chunk of what constitutes professional activity in any modern organization, business, or economy.

Zitron makes a similar point in his commentary: “Generative AI is not going to become AGI, nor will it become the kind of artificial intelligence you've seen in science fiction. Ultra-smart assistants like Jarvis from Iron Man would require a form of consciousness that no technology currently — or may ever — have — which is the ability to both process and understand information flawlessly and make decisions based on experience, which, if I haven't been clear enough, are all entirely distinct things.”

What is GenAI Good For? 

In the Goldman Sachs interview, Acemoglu makes the following observation when his interlocutor pushes back on the long-term potential of genAI:

Many people in the industry seem to believe in some sort of scaling law, i.e. that doubling the amount of data and compute capacity will double the capability of AI models. But I would challenge this view in several ways. What does it mean to double AI’s capabilities?
For open-ended tasks like customer service or understanding and summarizing text, no clear metric exists to demonstrate that the output is twice as good. Similarly, what does a doubling of data really mean, and what can it achieve? Including twice as much data from Reddit into the next version of GPT may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representative’s ability to help a customer troubleshoot problems with their video service. The quality of the data also matters, and it’s not clear where more high-quality data will come from and whether it will be easily and cheaply available to AI models. Lastly, the current architecture of AI technology itself may have limitations.
Human cognition involves many types of cognitive processes, sensory inputs, and reasoning capabilities. Large language models (LLMs) today have proven more impressive than many people would have predicted, but a big leap of faith is still required to believe that the architecture of predicting the next word in a sentence will achieve capabilities as smart as HAL 9000 in 2001: A Space Odyssey. It’s all but certain that current AI models won’t achieve anything close to such a feat within the next ten years.

It’s a valid point. If we have more genAI – more processing power, more data – where, and what, does it ultimately get us? The answers seem to be not much, and not that far. GenAI should not be mistaken for a consummate replication of full-fledged human cognition, which would necessarily include an experiential basis for meaningfully assessing and responding to a dynamic, occasionally unpredictable, interaction with a human being. GenAI is not fit for such purpose.

So, what is genAI good for? I think it will find favor in a relatively small portion of the market where it can operate in an essentially closed system, where probabilistic outcomes are helpful in accomplishing a task or enhancing employee (human) productivity. As Acemoglu makes clear, that’s not a minuscule market opportunity, but nor is it the massive addressable market that genAI’s tub-thumping proponents envision.

The interviewer at one point raises the chimera of genAI’s potential to transmogrify into a “superintelligence,” and Acemoglu dismisses it outright, saying nothing of the sort is likely to materialize in “even a thirty-year horizon, and probably beyond.”

When the costs and drawbacks of AI technology are explored in the interview, Acemoglu suggests that deepfakes, which do not concern him deeply, are probably just a “tip of the iceberg” in relation to how malefactors could misuse genAI. He adds drolly: “And a trillion dollars of investment in deepfakes would add a trillion dollars to GDP, but I don't think most people would be happy about that or benefit from it.”

But genAI security companies would benefit, and presumably so would the VCs that fund them and the investment bankers who serve as their agents in exit scenarios, so some limited good will come of it. 

Reasons to be Cheerful 

The next section of the Goldman Sachs report provides a counterpoint to the cold, wet blanket that Acemoglu has just thrown over the genAI revelers. Joseph Briggs, a senior global economist at Goldman Sachs, presents a more optimistic set of assumptions and a robust forecast for genAI across various industries and market segments. Briggs takes issue with some of Acemoglu’s assumptions, and he defends the integrity of the forecast model Goldman Sachs has architected. 

In having Briggs follow Acemoglu, Goldman Sachs reverses the polarity of “good cop, bad cop” staging, with the subdued Acemoglu discomfiting the audience with warnings of genAI underperformance and Briggs countering with reasons to be cheerful, as Ian Dury and the Blockheads would say back in the late ’70s.

Just when you think it’s safe to jump back into the genAI market, however, Jim Covello, Goldman Sachs’ head of Global Equity Research, takes the stage and declaims darkly that genAI might never solve complex problems, which it must solve if it has any hope of providing a return on investment given its prodigious costs. Quoting Covello:

We estimate that the AI infrastructure buildout will cost over $1tn in the next several years alone, which includes spending on data centers, utilities, and applications. So, the crucial question is: What $1tn problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my thirty years of closely following the tech industry.
Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

 Getting some pushback from his interviewer, who posits that technology costs often decline dramatically as technology evolves, Covello replies that such a notion is “revisionist history," arguing that "e-commerce, as we just discussed, was cheaper from day one, not ten years down the road. But even beyond that misconception, the tech world is too complacent in its assumption that AI costs will decline substantially over time.”

Multifaceted Doubt

Covello’s skepticism about genAI goes beyond its eye-popping costs, partly represented by the premium pricing of chips from the likes of Nvidia. Going against the current narrative grain, Covello says that AI might fall well short of the achievements and significance of technological inventions such as the internet, cell phones, and even laptop computers. This is not what a lot of investors want to hear, but Covello presses his point, conceding that genAI shows promise in making existing processes – he cites coding – more efficient, but adding that the costs of utilizing genAI to solve such problems are “higher than existing methods.” He even references the direct, relatively unhappy experience of Goldman Sachs, which found that “AI can update historical data in our company models more quickly than doing so manually, but at six times the cost.”

Like Acemoglu, Covello says he struggles to “believe that the technology will ever achieve the cognitive reasoning required to substantially augment or replace human interactions.” Bingo! We have bingo! We might say it in different ways, but it amounts to the same seemingly insurmountable stumbling block for genAI.

So, you might think that’s the end of the story, at least in relation to Covello’s bracing narrative, but you would be wrong. This is where things take a turn for the cynical. While Covello places low odds on the prospect of AI-related revenue growth at non-tech companies, that doesn’t mean he doesn’t see AI-related investment opportunities that can benefit savvy punters. 

Covello recognizes that big technology companies, including the cloud giants, must engage in an AI “arms race” because of market hype and fear of missing out (yes, FOMO strikes again). Hyperscalers and other large players have invested heavily in AI-related IT infrastructure, and Covello believes they will continue to do so until investors call time on their extravagance. When will that happen? Well, it might take a while. 

Covello says the following:

These companies have indeed already run up substantially, but history suggests that an expensive valuation alone won’t stop a company’s stock price from rising further if the fundamentals that made the company expensive in the first place remain intact. I’ve never seen a stock decline only because it’s expensive—a deterioration in fundamentals is almost always the culprit, and only then does valuation come into play.

I would argue that traditional fundamentals aren’t playing a material role in the appreciation of AI-gilded tech stocks. Instead, what’s driving the stratospheric share prices and valuations is an unshakeable expectation – we could arguably call it a conviction – that AI is the proverbial goose that lays diamond-encrusted golden eggs. There is, at this early stage, very little hard evidence that the tech giants’ investments in genAI are producing compelling returns. What we have instead is an expectation that spectacular returns are forthcoming – at some point.

Entranced investors believe, as a child might believe in the Easter Bunny or a cultist might believe in the mystical powers of a cult leader, and it will take a mighty jolt of reality to shake such near-religious belief. The exuberance truly is irrational, and logic and reason are futile forces in a distorted, magic-mirror funhouse where fantasy reigns.

As we saw in my previous post about the rising stock prices and valuations of the Magnificent Seven, the market suspends disbelief for as long as a dominant narrative drives momentum and the trading algorithms upward. When a new high is breached, the algorithms take the milestone as an auspicious indicator of further appreciation. One intoxicating high follows another until the salient market narrative loses its reality-distorting powers, at which point investors begin to abandon ship (and hope) and the algorithms begin taking the stocks back down to earth. 

Covello believes a reckoning will occur, but not just yet. He says:

Over-building things the world doesn’t have use for, or is not ready for, typically ends badly. The NASDAQ declined around 70% between the highs of the dot-com boom and the founding of Uber. The bursting of today’s AI bubble may not prove as problematic as the bursting of the dot-com bubble simply because many companies spending money today are better capitalized than the companies spending money back then. But if AI technology ends up having fewer use cases and lower adoption than consensus currently expects, it’s hard to imagine that won’t be problematic for many companies spending on the technology today.
That said, one of the most important lessons I've learned over the past three decades is that bubbles can take a long time to burst. That’s why I recommend remaining invested in AI infrastructure providers. If my skeptical view proves incorrect, these companies will continue to benefit. But even if I’m right, at least they will have generated substantial revenue from the theme that may better position them to adapt and evolve.

Enchanted and Enthralled 

The reasoning might be convoluted, and even a little cynical, but that doesn’t mean it’s wrong factually. GenAI is overhyped, Covello suggests, but that doesn’t make it a bad investment. Why fight the trend? Couldn’t it all turn ugly at some point? Sure, but investors just have to ride the wave until they see signs of implosion. What might those portents be? Covello again:

How long investors will remain satisfied with the mantra that “if you build it, they will come” remains an open question. The more time that passes without significant AI applications, the more challenging the AI story will become.
And my guess is that if important use cases don’t start to become more apparent in the next 12-18 months, investor enthusiasm may begin to fade. But the more important area to watch is corporate profitability. Sustained corporate profitability will allow sustained experimentation with negative ROI projects. As long as corporate profits remain robust, these experiments will keep running. So, I don’t expect companies to scale back spending on AI infrastructure and strategies until we enter a tougher part of the economic cycle, which we don’t expect anytime soon. That said, spending on these experiments will likely be one of the first things to go if and when corporate profitability starts to decline.

As long as the story enchants, the enchanted will remain enthralled. At some point, however, as AI’s boosters keep promising a bright new tomorrow that never comes, investors will begin to doubt. Even if they doubt, though, they might be dissuaded from hitting the “sell” button as long as hyperscalers keep investing in AI infrastructure. If hyperscalers pull back on that investment, perhaps because of a sudden onset of inclement macroeconomic conditions, woe betide those who miss the traffic signal turning from green to amber and then to red. 

Covello, perhaps because his job involves overseeing equity research, offers a more ambiguous takedown of genAI than the coolly dismissive assessment that Acemoglu articulated. Covello basically says that genAI might have a low ceiling – duck when you enter – but that doesn’t mean investors can’t make money on it while the tide under the vessel is rising. I understand his reasoning, and it’s hard to quibble with it theoretically, but I think there are safer ways to make money on the markets than timing the top of Nvidia, which has already climbed to nosebleed elevations and rewarded its long-term investors enormously.

The point-counterpoint, yin-yang cadence of the Goldman Sachs report continues, with the next section showcasing a discussion with two of the company’s senior equity research analysts, who remain fervently optimistic about genAI’s market prospects and excited about its long-term potential. They generally believe that the hyperscalers will remain committed to the venture – and will continue to have the wherewithal to support it – until the technology grows up and does something useful, like a prodigal child who finally gets serious about professional pursuits after a decade of bacchanalia and frivolity in the world’s fleshpots.

Slaking the Beast's Thirst  

After that, the report features an interview with Brian Janous, a cofounder of Cloverleaf Infrastructure, which works with utilities to unlock new grid capacity, an urgent requirement given the voracious energy consumption of datacenters running genAI. Janous contends that AI is contributing materially to an acute need for electrical-grid expansion, which is neither a trivial nor a speedy undertaking. The regulatory regime under which utilities operate means that the permitting and approvals processes take time, often a long time, before construction of new capacity can even begin. He says the technology-industry firmament, led by the hyperscalers and Nvidia, increasingly recognizes that electricity is an important commodity and is focused on addressing growing power constraints. But recognizing the problem is one thing, and solving it is a much bigger task.

That interview is followed by complementary data and observations from Carly Davenport, senior U.S. utilities equity research analyst at Goldman Sachs, who is interviewed regarding the electricity requirements of AI technology and datacenters. Davenport says that the implied share of total U.S. power demand attributable to datacenters will grow from about 3% currently to 8% by 2030, predicated on a 15% CAGR in datacenter power demand from 2023 to 2030.
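As a quick sanity check on those figures (using only the numbers quoted above, and assuming for simplicity that total U.S. power demand stays roughly flat over the period), a 15% CAGR compounded over the seven years from 2023 to 2030 implies datacenter demand grows by roughly 2.7x, which squares with a share rising from about 3% to about 8%:

```python
# Sanity check of the datacenter power-demand figures cited from the report.
# Simplifying assumption: total U.S. power demand is roughly flat 2023-2030,
# so datacenters' share scales directly with their demand growth.
cagr = 0.15                      # 15% compound annual growth rate
years = 2030 - 2023              # 7 years
growth_multiple = (1 + cagr) ** years

current_share = 0.03             # ~3% of total U.S. power demand today
implied_share = current_share * growth_multiple

print(f"Growth multiple over {years} years: {growth_multiple:.2f}x")
print(f"Implied 2030 share of U.S. power demand: {implied_share:.1%}")
```

Running this yields a growth multiple of about 2.66x and an implied 2030 share of about 8%, so the report’s headline numbers are internally consistent under that flat-demand assumption.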

A discussion follows on where power generation can be found to meet that growth. Davenport estimates a future generating capacity split of 60/40 between natural gas and renewables. Nuclear energy could be a factor, Davenport offers, but not from utilities. Instead, Davenport posits that datacenter owners and operators could strike deals with companies that provide and operate unregulated nuclear plants. 

In considering the challenges faced by utilities striving to meet rising capacity demand, Davenport amplifies and echoes points addressed by Janous. Unlike Janous, but as you would expect from an equity research analyst, Davenport identifies prospective beneficiaries of increasing energy demand. 

AI-related energy prospects in Europe are then addressed by Alberto Gandolfi, who is Goldman Sachs’ head of European Utilities Equity Research, while AI chip constraints are discussed by Goldman Sachs’ semiconductor research analysts. Finally, before a summary of relevant forecasts, readers are treated to a relatively bullish near- to intermediate-term prognosis for AI-related U.S. equities, followed by an assessment suggesting that, given the optimism baked into the stock prices and valuations of presumed AI leaders, an extremely favorable scenario would be required for the S&P 500 to deliver above-average returns on investment in the coming decade. So, the picks and shovels are selling, but what gets constructed with them might not be as commercially lucrative.

What It All Means 

Taken collectively, what does it all mean? What is this report from Goldman Sachs telling us? What, exactly, is its purpose? 

My first impression, after giving the document a thorough perusal, was that it represented a Rorschach test. Objectively, the content is the same for every reader, but each reader will interpret it differently, placing greater emphasis on sections with which they agree and downplaying or dismissing sections that run counter to their views. A selective confirmation bias will rule, as it often does. 

If so, it’s a finely balanced masterstroke by Goldman Sachs, which can’t lose. If, after the fact, the genAI market comes a cropper and underperforms dismally on the public and private markets, Goldman Sachs will be able to say that it foresaw the risks and counseled caution. Conversely, if the market performs up to or better than Goldman Sachs’ forecasts, the firm will say that its bullishness was warranted by fundamentals and vindicated by performance. It’s a fine line to walk, but this report manages to accomplish the objective like Charles Blondin striding across a tightrope over the Niagara Gorge.  

Boom or bust, or anything between those polarities, the genAI market will have met a fate, under a particular set of circumstances, explored by Goldman Sachs in this research report. 

We also learn that it’s still too early to call a definitive outcome for the genAI market, though everybody is rushing to place a theoretical (as in no money at risk) wager. Well, I suppose some money, and a lot of it, is already on the line, given that new investors are piling into the Magnificent Seven, and particularly Nvidia, every single trading day. 

Ultimately, the call from Goldman Sachs is that investors’ money wagered on Nvidia today, even at sky-high current prices, is not a bad play. Further out, though, it seems that many at Goldman Sachs have reason to believe that genAI prospects, beyond the picks-and-shovels boom sustained principally by the steadfast commitment and massive expenditures of hyperscalers, will last as long as macroeconomic conditions, hyperscaler profitability, and the market’s resilient suspension of disbelief permit. 
