AI and the Question of Trust

How to escape “human enfeeblement”

The New Yorker published a feature article this week that I encourage you to read. I’ve read the article, and I’m about to give you my impressions of it, but I always encourage others (and myself) to think for themselves and come to their own conclusions. I have no agenda here, other than to provide my thoughts as fodder for your further thoughts. It’s my idea of public service.

So, what do I think about this New Yorker article? Let’s start with the title, which frames the context and asks a provocative question. The title is “Sam Altman May Control Our Future—Can He Be Trusted?” An animated illustration, presumably created with AI, features Altman’s likeness sprouting several interchangeable heads, their countenances expressing various emotions and moods.

For a long time, as an active participant and keen observer of the technology industry, I’ve thought about the changing nature of tech’s ruling class. Given the milieu, the question of trust is inescapable. Trust is important. Without trust, we have its obverse, distrust. Worse, we also have paranoia, often justified by the circumstances, not to mention dishonesty and duplicity. Maybe we have corruption, too, depending on how we approach that term. If you consult the dictionary definitions of corruption, you’ll see what I mean. Corruption extends beyond base venality.

Let me cut to the chase and provide an answer to the question posed in the article’s title. No, you cannot trust Sam Altman. He’s an opaque personality, perhaps a multitude of shifting personalities (as the article’s illustration suggests), a man who has said he learned more from playing poker than from attending classes at Stanford, and therefore a man adept at dissimulation and hiding his intentions. The article, well researched and written by Ronan Farrow and Andrew Marantz, offers many examples of how Altman has conducted himself as a technology executive, including in his current role as CEO of OpenAI.

As I read the article, however, I ruefully noted that Altman is not the only member of the technology industry’s vanguard who warrants distrust and wariness. The technology industry, especially the largest companies associated with AI, has become exceptionally big business. The weight and pressure of unprecedented capital tend to condition and constrain the behavior of CEOs. Power (and capital) undoubtedly corrupts, and when one is swimming in turbulent capital pools of billions and even trillions of dollars, the undertow of ruthlessness is irresistibly strong. If you weren’t a borderline (or full-fledged) sociopath before you were exposed to these pressures, you’ll probably become one before long. Then again, to climb the greasy pole to power and reach the pinnacle of one of tech’s current corporate behemoths, you’d have to possess a superabundance of maniacal ambition and steely relentlessness. Shrewd intelligence helps, too.

Toxicity of Treacle

Traditional business media and social media tend to adhere to the “great man” historiographical profile, in which the CEOs of corporate giants are lauded and lionized, often unduly, for the successes of the companies they lead. It’s a form of hero worship. Worse, it’s an oversimplified, superficial analysis of all the factors that contribute to and sustain success in the technology industry (or any other industry, for that matter). How, amid all this bombast and obsequious flattery, is any mortal being expected to maintain a well-calibrated sense of self, an honest assessment of one’s personal strengths and weaknesses? There’s a saying that people can come to believe their press clippings, and, while that is undoubtedly a cliché, it’s also largely true. One should remind oneself, however, that treacle can be toxic.

The corrective, of course, is a more complete understanding of what constitutes success and who (and what and how) contributes to it. Yes, that means painting a more cluttered and expansive canvas, but we should accept the challenge in the interests of mental, organizational, and societal health.

Aside from the salient question of trust, the New Yorker article raises troubling questions about whether an unconstrained market, impelled constantly by its inherent imperatives of revenue growth and increased profitability, should be the sole custodian of AI. I realize I’m cutting against the grain of techno-optimism and utopian libertarianism, but we should remind ourselves that behind all the “isms” and “ologies,” you will invariably find fallible human beings who, as history amply demonstrates, can do considerable damage when their worst impulses are given free rein.

Regulation is a verboten topic in the technology industry. Generally, I tend to agree with those who argue that it is often unnecessary and counterproductive. But if AI ultimately evolves into a manipulatable, soulless form of directed intelligence (a term that is a minefield in itself) — I’m less concerned about AI acting on its own volition, not least because it doesn’t have a will of its own — it will be a revolutionary (and therefore dangerous) technological innovation. That means the usual rules need not, and should not, apply. At that point, or beforehand, in anticipation of such a day, we should develop an appropriate regulatory regime that prevents potentially cataclysmic abuse and misapplication of AI technology.

In what he says will be his last novel, What We Can Know, Ian McEwan envisions a post-apocalyptic future in which AI, after contributing to cataclysm, is nationalized and strictly controlled. Obviously, this is merely the subplot of a novel, and I’m not saying we need to nationalize AI, especially now that many national governments are run by people whose venality often exceeds that found in the executive suites of the private sector. Still, we might have to look at AI differently than we considered servers, switches, routers, smartphones, and other infrastructure and products of technology’s comparatively innocent adolescence.

Defeating Enfeeblement

As we move into an uncertain future, we should learn from the past while also recognizing that what worked for us previously might need to be modified or even superseded completely by an entirely different way of doing things. As always, we should attempt to overlay our self-interest with an enlightened understanding of where our personal rights end and the rights of others begin.

Aside from the question of trust, whether it is placed with Sam Altman or any of the other megatech chieftains, I draw your attention to the following excerpt from the feature article under discussion:

A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator. Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I. “slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision.

The term “AI doomers” is clearly loaded and highly pejorative, consigning to the bozo bin anybody with concerns about the technology or its corporate custodians. It’s a way of defaming, even demonizing, critics. Still, that’s not my primary interest with the excerpt above.

Instead, it’s the last sentence of that paragraph that demands notice. AI agents might be let loose, technically operating without supervision, but they don’t define their own purpose, and they have no purpose other than that defined for them by humans. The objectives of AI agents are set by the humans who own and operate them. AI agents and bots cannot define their own teleology, nor do they decide what data they ingest or how they are to be trained on that data; they exist to serve the interests of their human masters.

That said, the detail that most drew my interest is the reference to “human enfeeblement.” I hadn’t heard or seen the term before, though the underlying concept is familiar to me and probably to you. If we acquiesce to AI systems owned and operated with the ulterior motive of vitiating our ability to think critically, we will become nothing more than feckless pawns. We might already be propped up on a moving sidewalk to intellectual oblivion.

That’s another reason why I encourage you to read the article. After you’ve read it, you will undoubtedly have your own thoughts, and that is how it should be.