Annihilation or Utopia? AI Likely to Land Somewhere Between the Two Extremes

A couple of recent articles got my attention, and I thought I’d offer thoughts with which you might enthusiastically agree or vehemently disagree. Either reaction is acceptable. The only unacceptable reaction would be apathy and indifference. We’re here to think critically, not to sleepwalk.

The topic of AI is unavoidable. You can try to escape, but it will hunt you down, like a psychopath in a slasher movie. Unfortunately, in the movie, you’re playing the part of a supporting character who is compelled, for whatever strange reason, to run toward the danger rather than away from it.

You’ve probably noticed that danger and AI are occasionally conjoined these days, at least in the media. Some AI pundits, and even a few experts, contend that we’re getting closer to a point where AI will equal or surpass human intelligence. These commentators are often ambiguous in their definitions of human intelligence. They never tell us whose human intelligence, or what sort, AI will equal or exceed. Is it an aspect of human intelligence that deals exclusively with closed systems, or is it one capable of dynamic adaptation in the necessarily contingent, ever-changing real world in which we live?

At any rate, the MAGA warlord and DOGE overseer Elon Musk, who holds too many other distinctions and titles to cover in a relatively simple parenthetic clause, says there is only a 20% probability that AI will annihilate humanity. He seems to believe we should take comfort from those odds, but I’m not so sure. A one-in-five chance of annihilation means obliteration is a more likely outcome than winning at roulette in Las Vegas (about 2.6% on a straight-up wager at a double-zero table). Statisticians report, however, that you’d have a better chance of success playing chemin de fer, à la James Bond, in Monaco. In a well-ordered world, it should be easier to win at games of chance than to succumb to extinction.
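For the numerically inclined, here is a minimal back-of-the-envelope sketch of that comparison. It assumes a standard 38-pocket, double-zero Las Vegas roulette wheel, and it uses the commonly cited figure of roughly 46% for a winning banker hand in chemin de fer (that figure is an assumption for illustration, not a claim from the articles):

```python
# Quick sanity check of the probabilities cited above.
# Assumptions: a 38-pocket, double-zero Las Vegas roulette wheel, and a
# ~46% frequency for a winning banker hand in chemin de fer (approximate).

p_doom = 0.20            # Musk's stated probability of AI annihilation
p_roulette = 1 / 38      # straight-up roulette wager on a double-zero wheel
p_chemin_de_fer = 0.46   # approximate banker win rate (an assumption)

print(f"Musk's p(doom):       {p_doom:.1%}")           # 20.0%
print(f"Roulette straight-up: {p_roulette:.1%}")       # ~2.6%
print(f"Chemin de fer banker: {p_chemin_de_fer:.1%}")  # ~46%
print(f"Annihilation is {p_doom / p_roulette:.1f}x more likely than a roulette win.")
```

Run it and annihilation comes out roughly 7.6 times more likely than hitting your number at the wheel, which rather undercuts the comfort Musk seems to be offering.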

Depending on whom you ask and how you look at the intersection of AI and eternal doom, Musk might actually rate as an optimist. As a Business Insider article that features Musk’s musings recounts, cybersecurity director Roman Yampolskiy has said the “probability of doom” is 99.999999%. Well, at least we have that 0.000001% in our favor.

On further review, I’m not sure Musk is proficient with odds and statistics. He’s quoted in the same article as telling Joe Rogan that the future of AI will be “either super awesome or super bad,” adding that he doesn’t see it being “something in the middle.” If that’s the case, shouldn’t he adjust his odds a little more toward annihilation? If he foresees no middling or bland outcome, that leaves a stark spectrum with brightness and light on one end and eternal darkness on the other. Musk, presumably, is saying that his spectrum has a lot more lightness than darkness. We should take solace wherever we find it, I suppose.

We’re Not Curing Cancer — Or Are We?

I don’t think Musk is a betting man — not like Drake, who seems to lose more wagers than he wins — but, if Musk were a betting man, and supposing he were willing to play within my range of stakes (that is, not much), I’d take the mediocre outcome and set a time limit on it. Let’s say five years. I’m strongly inclined to bet that we won’t all be destroyed within that timeline. Nor do I believe AI will have transformed our world into a heavenly orb. The results will be readily apparent, but mostly underwhelming in relation to the hype that preceded them. There will be advances and there will be drawbacks, but we’re still going to have to do the things that humans already do today — think, feel, adapt, work (though Silicon Valley has other ideas about who or what will perform gainful employment), and amuse ourselves with entertainment created mostly by other humans.

I can’t imagine a world in which we would want only the programmatic, vapid diversions of AI’s regurgitations. It would be more dystopia than utopia.

Meanwhile, Dario Amodei, CEO of the richly valued AI company Anthropic, predicts that “superintelligence”, which he defines as AI systems that are more capable than Nobel prizewinners in most fields, could arrive as soon as next year. That seems aggressive, but then again you have to remember that Amodei has a vested interest in proclaiming the rapid ascent of AI, setting optimistic expectations that persuade investors to reach for their wallets, digital or otherwise.

Questioning the "Super" in Superintelligence

Indeed, Anthropic will need successive investment rounds until it reaches the point where its revenue, which will have to grow exponentially, exceeds the company’s already prodigious capital and operating costs. Anthropic is a private company, but the consensus is that it is losing billions of dollars. Anthropic’s objective is to have its Claude LLM integrated into paid applications and processes used by businesses, governments, and consumers. The company offers a basic free Claude service, but it also provides an $18-a-month “pro” tier, plus incrementally priced plans for companies that use Claude to build AI applications.

Anthropic’s present involves neither market domination nor anything resembling profitability, so it must sell investors on a brighter future. That’s why the company’s executives tout the future exploits of superintelligent AIs, or what Amodei colorfully describes as “a country of geniuses in a datacenter”. He believes these AI datacenter geniuses are capable of compressing a century of scientific progress into a decade. Amodei sees them achieving astounding outcomes for humanity: doubling our lifespans, curing nearly all infectious diseases, and finding cures for Alzheimer’s and cancer.

I hope it all happens, but a little skepticism seems warranted.

When I first got into the technology industry, a colleague at one company advised the younger employees, a cohort to which I belonged at the time, not to take themselves too seriously. “After all,” he said, “it’s not as if we’re curing cancer.” But now, that’s precisely the goal AI purveyors have set for themselves. They’re ambitious, yes, but also deadly serious. As I said, one hopes they’ll succeed, but if the track record of information technology is any indication of what the future will bring, predatory hucksterism often gains a commercial edge over altruism and idealism. If it pays, the market will do it, and not everything that pays is beneficial to the common good.

The issue is not so much what AI might accomplish — though that remains a valid question — but how and where entrepreneurs, investors, and for-profit companies will direct and apply the technology. Previous waves of information-technology innovation also promised Shangri-Las, and what we got instead were mixed blessings — some good, some bad, and a lot of stuff that’s mindlessly diverting rather than world-changing.

Amid the Thin Air of Great Expectations

Amodei, as the CEO of an AI company, feels compelled to set some grandiose expectations. Here’s what he told _The Times (UK)_:

"AI is going to be better than all of us at everything,” he said. If he’s right, we will have to figure out a new way to orient society around a reality where no human will ever be smarter than a machine. It will be a transition not unlike the industrial revolution, he said, when machines supplanted human brawn — and reordered the western world in the process.
“We probably, at some point, need to work out another new way to do things, and we have to do it very quickly.” Part of the solution, he argued, will probably include some form of universal basic income: government hand-outs to underemployed humans. “It’s only the beginning of the solution,” he said. “It’s not the end of the solution, because a job isn’t only money, it’s a way of structuring society.”

Let’s give Amodei credit for saying the quiet part out loud. Many other AI proponents prevaricate when discussing how AI will affect jobs currently performed by human beings. Amodei comes right out and says that AI is destined to displace a significant share of human work. He attempts to soften the blow by positing the prospect of a universal basic income.

Maybe such an eventuality will come to pass, but the current political climate makes it a long shot. Just look at how DOGE is pulverizing and sacking the U.S. federal government and how other Western governments are cutting rather than expanding social programs. We’re heading not into a period of enlightened self-interest, where taxpayers and governments understand that a healthy populace and robust economy depend on government investments and permanent programs, but into a darker era of austerity and myopia. A lot would have to change, politically and socially, before the vision of a universal basic income becomes anything more than a wistful promise.

For the record, I don’t believe AI will take your job, certainly not if you’re a knowledge worker. Based on where AI is today and where it is likely to be in the next few years, it isn’t anywhere near intelligent enough to do meaningful interpersonal work, which requires discretion, diplomacy, empathy, nuance, subtlety, and the crucial ability to “read the room.” It has no first-hand experience of finessing difficult conversations or problematic situations.

We have to remember, however, that Amodei is selling a vision to investors, pitching AI as the magic beans that cut costs by doing away with scores of employees — who get sick, take time off, go on vacations, and won’t work around the clock — and that also create new revenue streams through a superintelligence that delivers outcomes beyond our pitiable human ken.

What we have to figure out is how much of what Amodei is saying is a sales pitch, understandable coming from the CEO of an AI company, and how much of it is a realistic and sincere assessment of what’s likely to happen.
