Past is Prologue, and Forewarned is Forearmed
I remember clearly when I first experienced the World Wide Web (WWW). I was visiting the Toronto office of the Santa Cruz Operation (SCO) during the spring of 1991, and a group of developers asked whether I’d be interested in taking a trip through cyberspace. At first, I thought illicit substances might be involved, but, no, they ushered me to a UNIX workstation running an early web browser. I was then given the opportunity to browse for myself, using hyperlinks to move among static HTML websites. If today’s users were to see those sites, they’d flinch in horror at the text-heavy aesthetics and crude design, but what I saw back then was nothing short of a revelation.
As I browsed the nascent web, I spoke excitedly with the developers (who had become SCO employees when SCO acquired HCR Corporation in 1990) about what this new World Wide Web might mean for the future of people and societies. Context is always important, so remember that I was still in my 20s and not yet fully wise (or cynical) to the ways of the world. Remember, too, that I was speaking with developers and engineers, not ruthlessly ambitious Silicon Valley entrepreneurs or venture capitalists.
We envisioned various scenarios, most of them reflections of our relative ingenuousness and optimism. As we thought about how this new technology liberated vast storehouses of information and knowledge, we foresaw a world of potentially infinite learning and expanding wisdom. People would become more erudite, more understanding; the world would gradually be shorn of its ignorance, prejudices, and petty bigotries. To an unprecedented degree, we would become enlightened, able to imbibe the profundity of the world’s most eminent philosophers, scientists, historians, artists, and domain experts. That was to say nothing of the research possibilities, the capacity of universities and others to share knowledge and breakthroughs that would monumentally advance education, healthcare, and environmental sustainability.
At this point, dear reader, you’re probably smiling ruefully at my youthful ignorance and naivety. Sure, some of the sunny scenarios we envisioned back in 1991 came to pass, but so did a lot of baleful outcomes we hadn’t considered. Nobody on that hopeful spring day envisaged hacking, distributed denial of service (DDoS) attacks, phishing onslaughts, and other forms of online marauding. Nobody among our group predicted the death of privacy, or the rise of social media that would be nominally free but cost us dearly in so many other ways. We couldn’t know then that the future would include targeted, pervasive, and incessant advertising onslaughts that miss no opportunity to importune aggressively, like confrontational zombie beggars.
Limits of Unchecked Optimism
No one could foresee, or wanted to imagine, the manifold idiocies and wretched criminality of the dark web, where the worst impulses of humanity are indulged in a marketplace of boundless degeneracy. To a lesser degree, nobody foresaw the creation and evolution of Twitter (now X, a name that is unfortunately reminiscent of Atlanta gentlemen’s clubs of the 1990s), which has devolved from an often amusing and sometimes informative digital forum into a ramshackle ruin of indiscriminate anger, bitterness, disinformation, propaganda, and blathering incoherence. Algorithms took us there, and too many of us stumbled blindly into the shambling abyss.
I recall my misplaced optimism of the distant past only to frame it as prologue to the future of AI. We’re hearing about how and why AI will “save the world,” with the usual self-interested cheerleaders and parade marshals at the movement’s vanguard. While saving the world is a bridge too far for any neutral technology that depends on fallible human agency, it’s true that AI appears destined to facilitate material advances and undeniable progress in many realms of worldly endeavor. There are risks, however, and we owe it to ourselves and posterity to resist the blandishments of the utopians and to foresee the countervailing threats clearly. I don’t want to speak generically or sweepingly of malignant human nature, but we are well advised to look unflinchingly at how we’ve used past technological innovations to exploit and ensnare as well as to enhance and enrich the wider human community and the surrounding natural environment.
Consider some of the first use cases suggested for the coming AI era: the AI tutor for children, the AI “life coach” for adults, and the AI collaborative partner for scientific researchers. There are other variations on this theme, but you get the idea: AI will be your personal guide, and will, in the words of Marc Andreessen, be “infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful.” With all due respect to Andreessen and the VC cohort he represents, past experience and due diligence suggest that AI reality might not live up to the market-making boosterism and hype.
In a recent opinion piece published by The Hill, Swarat Chaudhuri, professor of computer science and director of the Trustworthy Intelligent Systems Laboratory at the University of Texas at Austin, suggests that we should ask whether an AI life coach might be programmed to condition outcomes that serve interests other than those of its users.
Quoting Chaudhuri:
“Just like earlier systems for generating recommendations, these AI confidantes will be ultimately designed to create revenue for their developers. This means that they will have incentives to manipulate you into clicking on ads forever, or to make sure you never cancel that subscription.

“The ability of these systems to continually generate new content will worsen their harmful impact.

“These AIs will be able to use pictures and words newly created for you personally to soothe, amuse and agitate your brain’s reward and stress systems. The dopamine circuits in our brains developed through millions of years of evolution. They were not designed to resist the onslaught of continual stimulation tailored to fit your most intimate hopes and fears.”
Chaudhuri notes that early forms of generative AI, including ChatGPT, have demonstrated an inclination to confabulate when definitive data on a given topic is disputed or ambiguous. With that in mind, we have reason to posit that AI life coaches might dispense faulty, injurious, and potentially disastrous recommendations to users. Chaudhuri concludes his opinion piece by stressing the need for a policy framework that restricts the harm that AI can do while allowing beneficial applications of the technology. He warns that cognitive agency (the ability of humans to act on the basis of genuine free will) could be compromised by carelessly deployed technology.
Won't Get Fooled Again
We can hope that our government stewards mix a perfectly balanced cocktail that protects against the worst abuses of AI while ensuring that salutary uses of the technology are unhindered. We should demand accountability and responsibility from our elected officials, who, in theory, work for us. That said, I’m trying to learn from history, my own and that of others. I don’t want to get fooled again.
Caveat emptor is a maxim that has stood the test of time, and for good reason. It has its practical limitations, though. The buyer, for example, should not be solely responsible for validating the quality and suitability of goods or services obtained in an imperfect world hosting fraudsters, embezzlers, racketeers, and other abusive parties. Nonetheless, forewarned is forearmed, and the buyer should beware, having full cognizance of the risks inherent in downloading and accepting the counsel of an “AI personal coach” that might have been developed to achieve objectives, potentially furtive and undeclared, at variance with those of the customer.