And Now for a Very Different Take on the DeepSeek Cause Célèbre

A media storm raged in the wake of news that China’s DeepSeek open-source AI models were capable of equaling or surpassing the performance of OpenAI’s ChatGPT, which is decidedly not open source.

In the past few days, arguably too much has been written and said about the emergence of DeepSeek in every form of media available to the commentariat. In fact, you might think that nothing else can be said or written on the timely (but for how long?) topic. But that’s where you would be wrong, dear reader, because I am belatedly crashing the party with a burnt offering of my own, striding into the festivities with a savory, if modest, appetizer to sustain the feeding frenzy.

Is it necessary for you to have my perspective? Is there an irrepressible demand for my voice to be added to the shrill cacophony? The answers are no and no — but I’m going to do it anyway.

So, yes, I have some thoughts on DeepSeek and its potential ramifications. Some of these thoughts are probably versions of perspectives that have already been voiced, but, here and there, I might break some new ground, or at least some ground that has been lightly traveled.

I will assume at the outset that you’re reasonably familiar with the current media narrative regarding DeepSeek. As with any other incipient development, the story is likely far from over. There’s risk, and little reward, in jumping into a fast-flowing current, but safely watching the world pass from the banks of the river isn’t much fun either.

If the DeepSeek story assumed book form, we’d be in the early chapters. You could close the book now, if you chose to do so, but you’d be choosing ignorance over enlightenment. I don’t mean that as a disparagement, by the way, because plenty of people these days are comfortable with the apparent certainties that ignorance offers.

Market Carnage, Boardroom Chaos

As for DeepSeek, when its open-source models were released and adjudged by many AI cognoscenti to be as efficacious as those from OpenAI, all hell broke loose on the stock market and dread invaded countless boardrooms. It was little more than a week ago that DeepSeek launched its latest open-source AI model, R1, soon reported to perform comparably to OpenAI’s o1 reasoning model but at a much lower cost.

[Image caption: AI thinks this represents “boardroom chaos.” Strange — no women, and the men clearly haven’t been inhibited by DEI policies.]

On Monday, after the news made its way around the world, U.S. stocks suffered a $1-trillion loss, with technology issues taking the heaviest hits. The market mayhem was aggravated by DeepSeek’s claim that it trained its models at a fraction of the cost associated with OpenAI’s training regimen, and that it did so, by necessity rather than by choice, without access to Nvidia’s latest and greatest GPUs.

And there was more bad news, as this excerpt from a MarketWatch article attests:

A smartphone application for an AI chatbot designed by the company (DeepSeek) became the most downloaded app in the Apple app store over the weekend.
This helped to sow doubts about whether U.S. tech companies’ aggressive spending on AI-related chips and infrastructure was really necessary. This second-guessing appeared to hit Nvidia the hardest.

Some observers remarked that the DeepSeek drama materialized from nowhere, but that’s not true. DeepSeek came from China, where the state and its companies have revisited the old adage about necessity serving as the mother of invention. Increasingly subject to export controls that prevent them from obtaining and deploying Nvidia’s most advanced GPUs, Chinese AI companies, including DeepSeek, were forced to explore other means of achieving AI model efficiencies. At first glance, and we’re early in this story (as I noted earlier), DeepSeek appears to have found ways to do more with less. That used to be a popular credo in Silicon Valley, but now it seems only to apply as a boardroom rationalization for layoffs and restructurings.

There’s a debate now as to whether DeepSeek might have engaged in a form of distillation that OpenAI views as tantamount to pilferage and plunder. Some wags — and I will throw my lot in with them here — observe that it’s farcical, and boldly hypocritical, for OpenAI to make such an allegation given its own rapacious consumption of data and intellectual property that belongs to others. Let’s put that issue to one side for now, if only because I believe it might be subject to potentially interminable disputes involving highly remunerated lawyers and truculent trade officials.
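For the uninitiated, distillation, in its classic machine-learning sense, means training a smaller “student” model to imitate the outputs of a larger “teacher” model rather than learning everything from scratch. The sketch below is a generic, textbook-style illustration in PyTorch, with hypothetical names throughout; it is emphatically not DeepSeek’s or OpenAI’s actual code, and it is offered only to make the term concrete.

```python
# A generic knowledge-distillation training step (textbook formulation, not
# DeepSeek's or OpenAI's code). A small "student" model learns to match the
# softened output distribution of a larger, frozen "teacher" model.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, inputs, optimizer, temperature=2.0):
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(inputs)      # teacher's predictions (frozen)
    student_logits = student(inputs)          # student's predictions (trainable)

    # KL divergence between temperature-softened distributions: the student is
    # rewarded for mimicking the teacher, not for matching ground-truth labels.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the variant OpenAI is reportedly alleging, the teacher’s generated text, harvested through its API, stands in for those internal probabilities — which is precisely what makes the practice so difficult to police.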

Some commentators, seeing what DeepSeek has achieved with more efficient use of hardware and leaner software models, cite its accomplishments as another triumph for open-source software projects. We’ll know more about the efficacy and viability of DeepSeek and other open-source AI offerings in relatively short order. I’m sure they’ll be exposed to extensive trial by fire in the days, weeks, and months to follow. Before long, we’ll have a definitive verdict, at least with regard to whether open source provides more bang for the proverbial buck than the closed and proprietary models that were heralded as the anointed ones, recipients of unprecedented infusions of venture capital and equally unprecedented patronage from cloud titans such as Microsoft and Google.

The cloud titans, of course, aren’t religious about or particularly loyal to any one AI model, proprietary or open source. If they find a way to create the same output at greater efficiency and lower cost, they’ll willingly embrace it. Yes, Microsoft has invested heavily in OpenAI, but OpenAI is far more threatened than Microsoft by the implications of DeepSeek as an open-source exemplar. If OpenAI is revealed as a decadent and extravagant spendthrift, consuming resources prodigiously without providing appreciably better outcomes than open-source alternatives, OpenAI will either have to mend its profligate ways or face an unceremonious exile from the verdant garden of never-ending money.

Why Jevons Paradox Might Not Apply

Meanwhile, Microsoft CEO Satya Nadella has attempted to mitigate the blast radius of DeepSeek developments by citing Jevons paradox. We can argue about whether the argument actually applies or whether it’s clever sophistry, but his intent is to reframe ostensibly bad news as news that is actually good.

The Jevons paradox originates from the dismal science of economics and is attributable to William Stanley Jevons, a 19th-century English economist. When he formulated the paradox that bears his name, Jevons was thinking about coal rather than AI. Nadella, however, is not concerned with coal — not even, we hope, as a means of powering AI datacenters — but is instead deeply invested in the success of AI. As such, Nadella reached for Jevons paradox to buttress the argument that the achievements of DeepSeek, even if taken at face value, augur well for AI and the tech-industry titans who have poured GDP-level riches into the technology.

Jevons paradox, which has also been called the Jevons effect, is said to occur, pursuant to its Wikipedia entry, when “technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application); however, as the cost of using the resource drops, overall demand increases causing total resource consumption to rise. Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox.”

Coming from Nadella, the invocation of Jevons paradox serves as a defense against charges of extravagance leveled at Microsoft. After all, Microsoft and others are of their own volition spending hundreds of billions of dollars on the construction of datacenters, as well as on the servers, GPUs, networks, and other infrastructure destined to reside within them. That is to say nothing of the not inconsiderable expenses associated with powering those datacenters with copious amounts of electricity. Nadella, it seems, is arguing that the greater efficiencies represented by whatever DeepSeek is doing will ultimately and paradoxically result in higher resource consumption due to increased demand from the invisible hand of the market.
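To make Nadella’s wager concrete, here is a toy, back-of-the-envelope illustration (all numbers invented for the purpose, not forecasts or anyone’s actual figures) of how a tenfold efficiency gain can either shrink or swell total compute consumption, depending entirely on how much demand grows in response.

```python
# A toy illustration of Jevons paradox applied to AI compute.
# All numbers are invented for illustration; nothing here is a forecast.

def total_compute(queries, gpu_hours_per_query):
    """Total resource consumption = usage volume x resource cost per use."""
    return queries * gpu_hours_per_query

baseline = total_compute(queries=1_000_000, gpu_hours_per_query=10)        # 10,000,000

# A tenfold efficiency gain: each query now needs 1 GPU-hour instead of 10.
# If demand grows less than tenfold, total consumption falls:
modest_demand = total_compute(queries=5_000_000, gpu_hours_per_query=1)    # 5,000,000

# If cheaper AI spurs demand to grow more than tenfold (Nadella's implicit bet),
# total consumption rises despite the efficiency gain. That is the paradox:
runaway_demand = total_compute(queries=20_000_000, gpu_hours_per_query=1)  # 20,000,000
```

Everything hinges on that last scenario: the paradox rescues the spending spree only if demand for AI proves voraciously elastic.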

Maybe Nadella has a point — and who doesn’t love a paradox? — but allow me to offer the demurral that a remorseless and inevitable demand for AI services, unlike the demand for coal in the 19th century, is not certain to materialize. Do we have a killer app for AI yet? Has AI necessarily proven that it’s the digital Swiss Army knife that will unlock riches from a cornucopia of applications and vertical markets? The jury, if I may count myself among the jurors, is still out.

AI has a measure of value, sure, but it’s not like coal in the 19th century or oil in the 20th century. It’s not a fossil fuel or a natural resource that is necessary for the very sustenance of modern life. At this point, AI is something that one might want rather than something that one needs — and sometimes it’s not even that. Perhaps we can’t even trust it . . . but more on that later.

Aside from Nadella’s musings about Jevons paradox, the garlanded executives at Nvidia have also made the case that the emergence of DeepSeek should not necessarily be seen as a dark portent for their company’s prospects. Nvidia’s arguments are strenuous, but it’s not clear at this point whether they’re persuasive.

DeepSeek, subject to U.S. technology export controls that restrict the range of GPUs that Nvidia can provide to Chinese buyers, reportedly used smaller numbers of less-powerful Nvidia GPUs than are utilized by the likes of OpenAI and the hyperscalers who run AI on their cloud infrastructure.

Debate rages about what kind and how many GPUs DeepSeek actually used to train its model, but I think that’s missing a larger point: U.S. export controls are likely to become more stringent and exacting, and China and Chinese companies are fully cognizant that they’ll have to continue to do more with less or develop their own GPUs that can be sourced domestically.

At the same time, U.S. AI providers, including all the hyperscalers, will monitor Chinese technological developments closely. If China can learn from the U.S., the reverse is equally true; the American hyperscalers will ensure that they achieve similar, if not better, efficiencies. We already know that they are trying to lessen their reliance on Nvidia by designing and developing their own GPUs and AI accelerators.

In the long run, Nvidia is likely to hit a wall, both in China and closer to home. As for the immediate future, Nvidia is likely to continue to spin money from the market’s loom, but the days of easy pickings are coming to an end.

Beyond Geopolitics, an Implied Threat

Now that we’ve touched on geopolitics, I suppose I should say a few more words on the topic. You can’t get away from geopolitics — not now, not in times like these.

The first thing I’ll say is that DeepSeek, as an open-source project of Chinese provenance, might qualify as a clever and hugely destructive means of sideswiping and undermining the U.S. technology industry by obliterating, if only temporarily, the share prices and valuations of public and private companies in the AI industry. All of a sudden, all the verities that observers and market makers took for granted have been exploded and shattered. Up is now down, dark is now light. As such, market perceptions of AI value might have to be reconstructed.

Let’s move on to something else, though.

I spoke briefly of hypocrisy before I mentioned paradoxes, but now we’ll return to hypocrisy. Do you remember, not that long ago, when all the demigods of the tech firmament, including Silicon Valley éminence grise Marc Andreessen, touted AI as a benevolent superpower that would serve as your digital life coach or trusted consigliere in both your professional and personal lives? AI could be trusted, they told us, to give you wise counsel and to help you become a better, healthier, more productive human being.

That’s what they said, but now they’re saying Chinese AI, far from being a benevolent entity, is a threat to personal safety and national security. Rather than helping you with your homework or your business plan, Chinese AI, as represented by DeepSeek, can allegedly engage in contrived and programmed discussions that perniciously influence users’ thought processes and personal decisions.

Further, DeepSeek might share users’ confidential thoughts, perspectives, questions, and opinions with the Chinese government. This line of argument suggests that users who consult and patronize AI bots might not have the acumen or knowledge to know if or when they are being deceived, influenced, or manipulated.

Some have asserted that LLMs from adversarial nations, such as China, might be capable of insidiously infiltrating personal lives and professional realms to amplify and disseminate harmful propaganda. As such, these AI models from China, so the argument goes, are a significant threat — all the more so because the menace is surreptitious — to what remains of the social fabric and to national security.

This is all possible. There’s no way in which I can definitively refute the possibility that Chinese companies, under the control and patronage of the Chinese government, might use AI models and services to subvert and undermine other nations, including the U.S.

That said — and here’s where we venture into a disturbing set of possibilities — if China can wield AI for malicious purposes, can’t others, including those who wrap themselves in the American flag, do the same thing? Can’t some of these companies, which possess unprecedented wealth and power, abuse the privileged trust that AI seemingly confers?

If DeepSeek might abuse AI for illicit gain, what’s stopping anybody else, including OpenAI, from doing likewise? It’s all a question of trust, I suppose, but now that we have reason to suspect that an AI buddy might not be all that aligned with one’s own interests, how much trust do you want to place in it? Does the country of origin make all the difference? Especially at a time when regulatory oversight might be gutted and corporate irresponsibility might be more common than corporate responsibility?

Look, if the Chinese can use technology for malign purposes, so can anybody else. To make matters worse, it’s not even clear, at this point, that you’re getting all that much value from AI’s Faustian bargain.

The digital mask has slipped, and we see that AI might conceal a hidden agenda involving furtive control and social manipulation. Those who own AI models and services might never be able to achieve those objectives, perhaps because the technology remains overhyped or perhaps because they’ll be constrained from using AI in ways that abuse privilege and trust. Nonetheless, we can’t say we haven’t been given fair warning.
