Why AI Will Not Save the World

A couple of months ago, Marc Andreessen, he of the (partially) eponymous Andreessen Horowitz venture-capital behemoth, published a lengthy blog post on his company’s website extolling the attributes, capabilities, and “world-saving” virtues of artificial intelligence (AI). He actually titled the post, “Why AI Will Save the World.” (The title raises concern. How does one define “saving the world,” and how does one accomplish the task comprehensively and decisively, so that it doesn’t need to be done again at a later date by somebody else? Saving the world is a wildly grandiose initiative for anybody to pursue, but everybody in Silicon Valley seems to be doing it.)

All things considered, Andreessen’s latest clarion call isn’t his best work as a digital pamphleteer. Then again, Andreessen’s best work sets a high bar. More than a decade ago, he composed the strikingly prescient “Why Software Is Eating the World,” which made an incontrovertible case for the embrace of software and digital processes across industries and geographies. Software was eating the world, and you had the option of joining the feast or being eaten.

In his latest missive, Andreessen turns his attention to AI, and you can see that he has a passion for the topic. Whether he believes everything he’s writing is another matter. The entirety of “Why AI Will Save the World” is haunted by an aura of authorial disingenuousness. Andreessen hastily assembles and then pulverizes a series of gawping straw men in pursuit of a simplistic narrative that arrays white-hatted heroes, all ardent proponents and purveyors of AI, against the black-hatted neurotic “doomers” and their dastardly self-serving manipulators. It’s superhero-versus-villain stuff, like an outline for an idiosyncratic Marvel cinematic opus, with Captain AI getting the better of a nefarious mob of dim Luddites, mouth-breathing politicians, and scheming monopolists.

Drubbing the Doomers

He really does let loose. In canvas-spattering, abstract-expressionist exuberance, Andreessen depicts the “doomers” as quivering simpletons, incontinently afraid that AI will kill us all, ruin our society, take all our jobs, result in crippling inequality, and — my personal favorite on the strength of condescension alone — “lead to bad people doing bad things.” The formulation and tenor of that phrase — “bad people doing bad things” — seems to mock any conceivable harm that people could do, intentionally or inadvertently, with a technology as powerful as AI. Make no mistake, Andreessen sees the power and the glory in AI’s promise. He sees more than that, of course. He says explicitly in his post that the stakes are high, the opportunities profound. He wouldn’t have written the post otherwise.

When he’s not giving the pitiable doomers a forceful drubbing, Andreessen is lauding the virtues of AI. He envisages “AI making everything we care about better.” Not only that, but “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.”

You might think that’s enough, but there’s more. It’s not just the children who will get an AI life coach. I quote Andreessen again: “Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.” On he goes from there, assigning AI advisers and therapists to all professions and industries.

AI Jeeves to the Rescue

When I first read this section of Andreessen’s post, the comic novels of P. G. Wodehouse came to mind. If Andreessen is to be believed, AI will allow each of us to become an information-age Bertie Wooster, served by our own artificially intelligent Reginald (“Reggie” to friends) Jeeves. If you’ll recall, Jeeves was more than a butler; he was a full-service valet, clever and shrewd enough to extricate Bertie and his hapless buddies from the stickiest of situations. Who wouldn’t want an AI version of Jeeves? (Product name: AI Jeeves™.) I, too, occasionally need to be extricated from difficult circumstances, rescued from being hoisted on my own petard. Sign me up.

Of course, we’re a long way from the realization of that utopian scenario. Whether we’re talking about AI broadly, or generative AI with its large language models (LLMs), the technology is nowhere near capable of meeting the lofty goals Andreessen has set for it. That’s not to say something along those lines will not materialize. Given effort, resources, and time, AI, in its various manifestations, might develop into something approximating at least part of Andreessen’s vision. Even if it gets there — and there’s an active debate within the AI community about whether AI’s “take off” will be relatively fast or slow — one suspects that Andreessen is gilding the lily in service to his case.

As for the gist of his case, his recommendations of what needs to be done, he lays it out clearly enough for our consideration near the end of his post. His bugbear is “regulatory capture,” which he defines as a regulatory framework finessed and stage-managed by the technology industry’s largest AI players. He doesn’t name those big players, but you can guess who they are. Everything else is fair game, subject to the non-regulatory rules of market competition. Well, China is an exception. The Digital Cold War is undeniably here, at least for now, because Andreessen views China as an AI threat that must be opposed and vanquished. As for the “bad people doing bad things with AI,” he says the best way of thwarting the malefactors is through the development and application of defensive AI technologies.

My interpretation here is that Andreessen sees AI security as yet another lavish room in the resplendent mansion of cybersecurity. I can see how venture capitalists would benefit hugely from such an arrangement, but I’m not sure it’s an optimal dispensation for the rest of us. Even today, the attackers and offenders frequently seem at least a half step ahead of the defenders, and often many steps ahead when they’re preying on the naive. The rub is that Andreessen undoubtedly knows that current approaches and methodologies of cybersecurity would struggle, to put matters euphemistically, against the presumed active intelligence and adaptability of AI-enabled onslaughts.

A Prudent Time to Take Stock

Ultimately, it’s what he fails to declare that represents the core problem with Andreessen’s spiel. If the fate of AI were to be decided by a court of law, Andreessen would be forced to recuse himself as judge because of a glaring conflict of interest, an elephant that takes up more room in the virtual courthouse as his post builds toward its prescriptive peroration.

Perhaps I’m being nostalgic and overly romantic about the past, but this industry seemed to be more candid, more honest in days of yore. People in the companies that built the foundational infrastructure for the internet and the web knew they weren’t saving the world, whatever that means. What they were building, and they were honest about it, were products and companies that had practical utility and long-term business value. I remember working with engineers who would keep the marketers grounded, preventing the fluff and the hype from taking dizzying flight. One lead engineer I knew, noticing the growing irrational exuberance of the product marketers to whom he was explaining the features of a new software offering, admonished his admirers by bluntly stating, “Look, it’s good, but we’re not curing cancer here.” Now, the industry is allegedly saving the world, humility and practicalities be damned.

Marc Andreessen knows that AI is not going to save the world. He also knows that the biggest threat associated with AI will not take the form of marauding self-possessed machines seeking to destroy humanity. That specter seems to me at least as outlandish as the saving-the-world fever dream. He’s perfectly aware that the biggest threat of AI results from the technology’s misuse, through human dereliction or malevolence. He doesn’t want us to take our time to prudently anticipate, identify, and preemptively mitigate the various dangers, deceptions, and threats posed by misapplied AI. Time, you see, is money, and there’s plenty of money to be made through timely (as in occurring yesterday) investments in AI. Visions of massive returns on AI investment dance in the heads of venture capitalists, Andreessen included. He has considerable skin in the game, and it shows in the stridency of expression and the simplistic portrayal of anybody with justifiable concerns as an abject “doomer.”

We all have a stake in the future. Many of us sense that the pace of technological advance has quickened, and that today’s latest technologies, featuring increasingly sophisticated forms of automated intelligence (if not quite human intelligence), offer an immense spectrum of possibilities. The problem is, our ability to understand the effects and repercussions of the newest technologies seems to be falling behind the pace at which these technologies are proliferating and evolving. I am neither a moral philosopher nor a sociologist, rendering me unqualified to make an accurate diagnosis, but we seem, as a society, to be struggling to update our ethical, social, political, and economic constructs to account for the material and technological changes overtaking us.

Investment opportunities come and go; like trains in a busy station, another will come along soon enough. Today, though, seems a time to pause, to reflect seriously on how the future is likely to deviate from the present and the past, and to act responsibly with more communal enlightenment than narrow self-interest.
