On the Need for AI Regulatory Control
But the motivation for regulation isn’t the specter of superintelligent AI autonomy
In Ian McEwan’s latest (and perhaps last) novel, What We Can Know, part of the story is set in a post-apocalyptic future, ravaged by environmental cataclysm and nuclear warfare. This narrative strand, in which posterity looks back at us from the distant vantage of 2119, recounts (among other things) that we made the grievous mistake of allowing private ownership of AI, an unprecedentedly powerful information technology.
What We Can Know is a novel, a work of literary fiction, so its contents are neither a historical account nor necessarily the author’s opinion. Still, we might be forgiven for wondering, even from our limited contemporary standpoint, whether the dissemination of AI should be treated a little differently from the manufacture and sale of garden-variety widgets or the tech products that preceded it. I don’t believe that AI is conscious of itself, nor do I believe that it can act of its own volition (more on which later). Nonetheless, I recognize that AI represents a form of intelligent automation that could do profound harm in the venal hands of malevolent human actors.
Anthropic CEO Dario Amodei has repeatedly banged the drum of AI doom. He might be right about the dangers associated with AI, but I don’t think the danger is autonomous AI, which does not and probably cannot exist. Instead, I posit that the risk derives from how and for what purpose AI is designed and deployed by companies driven primarily, if not exclusively, by an amoral profit motive.
In interviews, including one published (twice) in Fortune, Amodei has said that he and other CEOs of companies that own and operate frontier models should not be defining and enforcing AI guardrails. He has spoken often about the imperative for regulatory oversight and control of the AI industry. Here’s a quote from the Fortune article:
In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of big tech companies.
“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
“Who elected you and Sam Altman?” Cooper asked.
“No one. Honestly, no one,” Amodei replied.
We’re the problem
It’s true that neither Amodei nor Altman has been elected AI guardian. It’s also true that there are no elections in AI frontier-land. Everything that happens in AI currently is mediated exclusively by market forces. There are no checks and balances, no countervailing forces that would prevent a company from releasing models and bots that could be wielded to devastating effect.
This, by the way, is where Amodei misdiagnoses the nature of the AI threat.
Anthropic is on record as saying that it has thwarted what it calls “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.” You probably noticed, as I did, that the preceding statement includes a significant qualifier: the word “substantial.” Amodei and company (literally) portray the foiled attack as one launched by automated machine intelligence, but the use of the word “substantial” implies that there was human intervention, likely in the form of explicit direction. The question here is what “substantial” connotes. Anthropic, for its own unknown reasons, has decided to downplay the role of human intervention in the attack. I have chosen to take a different view.
As we discussed yesterday, the existence of volition meaningfully distinguishes human conduct from that of machines, even now. The dictionary definition of volition is “the faculty or power of using one’s will.” AI does not possess consciousness, much less volition.
Consciousness, volition, ambition, and greed are all human attributes. These endowed attributes can provide animus and impetus to AI systems, but AI systems, without the defining touch of human imposition, are simply unequipped to provide their own teleology, or purpose. And, yes, when it comes to AI, humans function as gods of origin, the entities that created the world of AI and remain its active overseers.
Amodei and others are right to warn about the destructive potential of AI. The threat, however, comes from the humans who create, own, utilize, and abuse the technology rather than from the technology itself, which receives instruction and marching orders from human generals.
Regulation is necessary, but the regulatory structure must have integrity and remain as far beyond corruption and reproach as circumstances and human fallibility allow. It’s a tall regulatory order, but we should be giving serious thought to ensuring its ethical and functional efficacy.