Amid the Media Deluge: Revisiting AI’s Big Questions

News articles, feature stories, and opinion pieces about artificial intelligence are proliferating wildly. If the technological advance of AI is outpacing even its media coverage, then all the optimistic and pessimistic AI scenarios that philosophers, pundits, and scientists have propounded might materialize sooner than expected. Unfortunately, the worst of the pessimistic scenarios are very bad indeed, including the potential extinction of humanity. Those apocalyptic visions understandably overshadow the considerable benefits that might accrue from fulfillment of the optimistic scenarios, making those benefits seem almost irrelevant.

To the best of my knowledge, however, nobody on this grand orb has mastered the dark art of prophecy. Notwithstanding various claims of clairvoyance, the future, when it becomes our present, always brings surprises, delivering curveballs and events that could not possibly be anticipated far in advance. Surprises are usually unwelcome, which is why so many people dislike them even as they haplessly encounter them on a regular basis. We always meet change, but it’s never the change we expect. We are often capable of anticipating the general direction of change, but we can’t predict its particulars or its full ramifications. As humans, most of us struggle with change as much as we fail at prophecy.

The pointlessness of far-sighted prognostication doesn’t discourage its practitioners, who peer into their befogged crystal balls and spin their rickety wheels of fortune in vain bids to discern what others are incapable of seeing. I suppose it’s a facet of human nature, indicative of our impatience, restlessness, and even our innate optimism. In predicting a future, one must, after all, believe that a future is viable, however good or bad one might believe it to be.

So much is written and said about AI that it’s difficult to know who and what to believe. Even the acknowledged experts, the specialists in the field, differ in their assessments of where the technology stands today and how (and how quickly) it will evolve into something of considerably greater capacity and power.

Change of Unprecedented Scale

The media frenzy around AI has unleashed a tidal wave of coverage, much of it little more than foamy froth. Readers must navigate warily. The topic is inherently complex, endlessly multifaceted, and seemingly unprecedented in its portentousness. The advent of the Internet was doubtless a major moment in the march of technology and civilization, but fulfillment of AI’s promise would revise history and redefine the Internet as an enabling precursor to something of a different scale entirely.

Perhaps this is why even the most capable feature writers struggle to find the right narrative pitch when addressing AI. There is much to recommend, for example, in Nick Bilton’s recent Vanity Fair article, “Artificial Intelligence May Be Humanity’s Most Ingenious Invention—And Its Last?” The question mark in the title is a clue that the publication and the author are confounded by the baleful enormity of the subject matter, and by the authorial need to reconcile, or at least balance, wildly divergent expert opinions as to whether AI will result in a cornucopia of human and business benefits or in the final grim chapter for the human race. Obviously, the outcomes need not accrue to one polar extreme or the other, but even the vast range of mooted possibilities tells its own uncertain story.

Dealing with Imponderables

From the first paragraph, you can see how the author tries to meet the challenge by treating the subject with the distancing effect of irreverent humor:

We invented wheels and compasses and chocolate chip cookie dough ice cream and the Eames lounge chair and penicillin and E = mc² and beer that comes in six-packs and guns and dildos and the Pet Rock and Doggles (eyewear for dogs) and square watermelons. “One small step for man.” We came up with the Lindy Hop and musical toothbrushes and mustard gas and glow-in-the-dark Band-Aids and paper and the microscope and bacon—fucking bacon!—and Christmas. “Ma-ma-se, ma-ma-sa, ma-ma-ko-ssa.” We went to the bottom of the ocean and into orbit. We sucked energy from the sun and fertilizer from the air. “Let there be light.” We created the most amazing pink flamingo lawn ornaments that come in packs of two and only cost $9.99!

With his whimsical opening gambit, Bilton suggests that diving headlong into a topic that posits everything from a great leap forward to the utter extinction of humanity might be too much for the author and the readers to bear. The concern about the latter, presumably, is that readers will find the material melodramatic and won’t take it seriously. As he mentions later in the article, the darkest of the AI scenarios are surreal in their severity.

Bilton eventually gets around to addressing AI in all its possible permutations, but first he feels some light comedy is necessary, so he embarks on a kaleidoscopic review of humanity’s often whimsical innovations and inventions. I understand why he took the playful tack, but the subsequent effect, after all the baleful scenarios are put before us, is jarring and incongruous, signaling that the writer is staggered by the enormity of the topic at hand, not least the implications of the worst-case scenarios, which experts say have varying probabilities of actually occurring (anything from a near-100% to a 1% chance of extinction over periods ranging from 10 to 100 years).

The article contains occasional dollops of irreverent humor, designed to leaven an otherwise dystopian meal. Cameos and commentary come from a variety of tech-industry notables, including Google co-founder Larry Page, who is reputed to have offered some disconcerting thoughts on speciesism against intelligent machines, as well as from Paul Kedrosky, Sam Altman, and others.

The Need to Interrogate Change

For any species, extinction is as bad as it gets. There’s no coming back from extinction. Once the extinct species leaves through the exit, there’s no door for re-entry. It’s understandable, therefore, that the finality of extinction is accorded a stunned and skeptical prominence in the narrative arc of the Vanity Fair article. Nonetheless, AI, embraced and wielded wilfully by some humans against others, need not finish us off to inflict considerable damage. The following excerpt from the article makes that point:

“One AI CEO I spoke with said that the CIOs of Fortune 500 companies are very vocal about cutting their workforce in half in the coming years—then it will likely be cut in half again and again, until a handful of employees are overseeing LLMs to do the same work thousands of people used to do. According to a report by the outplacement service provider Challenger, Gray & Christmas, which tracks layoffs across the United States, 5 percent of people who were laid off in the first quarter of this year lost their jobs to AI. Now, while that number isn’t staggering by any proportion (yet), what is distressing is that this is the first time in the 30-year history of the company’s report that it has cited AI as a reason for layoffs.”

Economic theory and the historical record suggest that each technological wave, considered across generational timelines, creates more jobs than it destroys. But perhaps it’s different this time. If AI is wielded successfully by Fortune 500 CIOs, as a scythe that mows down swaths of staff deemed redundant, it’s difficult to discern how and where the hordes of displaced workers would find gainful employment. How many AI prompt engineers does an enterprise, and the economy to which it belongs, really need? Is it not possible for AI systems themselves to develop, as they evolve further, the capacity to compose their own prompts, displacing their human attendants?

We can’t predict the future with unerring accuracy, but those are the sorts of questions we must consider, and answer decisively, if we want to improve the odds of favorable AI outcomes and mitigate the probability of darker eventualities. As the Pulitzer Prize-winning author Saul Bellow wrote: “We can’t master change. It is too vast, too swift. We’d kill ourselves trying. It is essential, however, to try to understand transformations directly affecting us. That may not be possible either but we have no choice.”
