Some of Us Might be Done with AI, but AI Isn’t Done with Us
Despite the Hype, We Might Want to Prepare for Adjustments
You can’t get away from AI. Or maybe you could, but you’d need to find someplace off the grid, in the wilderness, or in outer space. Then again, Elon Musk and the other billionaire rocket merchants would probably ensure that you were connected to AI even in the hinterlands of the solar system.
No matter. I’m not in such isolation, splendid or otherwise.
Instead, I’m connected to the world, ingesting data, ideas, and thoughts, all while trying to filter out the crap and nonsense. In that sense, what I’m doing is the cognitive analogue of smoke detecting and pool filtering, admittedly a weird combination. Where, and for what earthly purpose, would you use a hybrid of a smoke detector and a pool filter?
Allow me to attempt an explanation. In the imagined cognitive scenario, the metaphorical smoke detector is actually a bullshit detector, and the figurative pool filter represents a means of intercepting and removing ideas that, while not necessarily bullshit, are noxious. This rebuffing and sifting is a full-time job, but I suppose any sentient human being does it every single day in these eventful times of ours.
Even when we don’t want to think about AI, having heard and seen enough of it to have slumped into a jaded stupor, we still feel compelled to resolve nagging questions about the technology. These questions are philosophical, metaphysical, epistemological, and ethical. It’s odd how the onset of AI has provoked so much introspection and reexamination of what it means to be human.
Not in the Metaverse Anymore
Believe me, I’d like to conclude that AI is just another tub-thumping, technology-industry hype cycle — remember Zuckerberg’s excitable gesticulations about the dawning of the Metaverse? — but I can’t dismiss this stuff unreservedly. The reason I can’t dismiss AI is that it seems to be reviving so many big questions that we thought we’d resolved in our time. (My time, as it were, technically spans centuries and millennia, so I can claim a patina of venerable wisdom.)
The thing is, our embrace of AI seems to throw everything into the volatile machinery of contingency. What will happen next? We really have no idea. The rules of cause and effect are still with us, but the discrete causes and effects are subject to sudden change. Even the so-called experts, the people you invariably consult for knowledgeable maze navigation, disagree profoundly and vehemently on AI’s essence and ultimate capabilities. Some of them, shockingly, admit to being just as flummoxed as the rest of us.
We’ve stumbled into an epochal conundrum, a problem of another order. Our capacity to understand what’s coming at us seems to demand finesse and subtlety, as well as a certain ease with ambiguity. Unfortunately, too many of us have rapidly diminishing attention spans combined with an intolerant insistence on simple binary answers: black or white, right or wrong, good versus evil. These are circumstances where reaching for the easy button might be the worst option, but we’ve become hooked on the simple. We live in a strange time that prizes snap judgments (so-called hot takes), ostentatious self-indulgence, and instant gratification. Sober second thought doesn’t take a selfie.
All of which brings me to an interesting article in the New Yorker that I recommend you read. The author, Joshua Rothman, confronts the intense bifurcation of AI viewpoints among the intelligentsia. He tries to reconcile — or at least understand — the clashing viewpoints. Here’s an excerpt that frames the nut he’s trying to crack:
Which is it: business as usual or the end of the world? “The test of a first-rate intelligence,” F. Scott Fitzgerald famously claimed, “is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” Reading these reports back-to-back, I found myself losing that ability, and speaking to their authors in succession, in the course of a single afternoon, I became positively deranged. “AI 2027” and “AI as Normal Technology” aim to describe the same reality, and have been written by deeply knowledgeable experts, but arrive at absurdly divergent conclusions. Discussing the future of A.I. with Kapoor, Narayanan, and Kokotajlo, I felt like I was having a conversation about spirituality with Richard Dawkins and the Pope.
In the parable of the blind men and the elephant, a group of well-intentioned people grapple with an unfamiliar object, failing to agree on its nature because each believes that the part he’s encountered defines the whole. That’s part of the problem with A.I.—it’s hard to see the whole of something new. But it’s also true, as Kapoor and Narayanan write, that “today’s AI safety discourse is characterized by deep differences in worldviews.” If I were to sum up those differences, I’d say that, broadly speaking, West Coast, Silicon Valley thinkers are drawn to visions of rapid transformation, while East Coast academics recoil from them; that A.I. researchers believe in quick experimental progress, while other computer scientists yearn for theoretical rigor; and that people in the A.I. industry want to make history, while those outside of it are bored of tech hype. Meanwhile, there are barely articulated differences on political and human questions—about what people want, how technology evolves, how societies change, how minds work, what “thinking” is, and so on—that help push people into one camp or the other.
Zombie Questions and the Criticality of Agency
There you have it. Just under the surface of the debates about what AI signifies and portends across the vast landscapes of careers, industries, and lives, there are latent questions rising again like the undead. We have to think anew about all this stuff, and look in the mirror while doing it.
That last part, looking in the mirror, is particularly important because, ultimately, we decide how this technology will be used and regulated in our society. Further, what we think about AI — whether we’re optimistic or pessimistic or indifferent about the technology — derives at least partly from our characters and dispositions.
Regardless of whether our glass is half full or half empty, AI is created by people, and people (at least some people) decide how, when, where, why, and what it will be used to do. Purpose is a sentient attribute, and in the absence of hypothetical AI superintelligence, which might remain ensconced in the realm of science fiction, AI does not possess human sentience.
In his New Yorker article, Rothman discusses recursive self-improvement (RSI), a hypothetical process in which AI programs perform their own research, incrementally improving in a perpetual flywheel or feedback loop. Eventually, AI programs, accelerating their self-improvement processes, run so far ahead of their human programmers that they determine their own course, circumventing human controls and constraints.
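To make the flywheel idea concrete, here is a deliberately toy sketch of the loop Rothman describes. Nothing in it reflects how any real system works; the ToyModel class, its capability score, and the propose_improvement step are invented stand-ins for research abilities that remain hypothetical.

```python
import random

class ToyModel:
    """A stand-in 'AI' whose only attribute is a capability score."""
    def __init__(self, capability=1.0):
        self.capability = capability

    def propose_improvement(self):
        # The model 'researches' a successor; the attempt can fail or succeed.
        return ToyModel(self.capability * random.uniform(0.95, 1.30))

def recursive_self_improvement(model, generations=10):
    """Sketch of the RSI flywheel: each generation tries to build a better
    successor, and the successor takes over the next round of research."""
    for g in range(generations):
        candidate = model.propose_improvement()
        if candidate.capability <= model.capability:
            continue  # failed attempt; keep the current model and try again
        model = candidate
        print(f"generation {g + 1}: capability {model.capability:.2f}")
    return model

if __name__ == "__main__":
    recursive_self_improvement(ToyModel())
```

The worry, in the RSI scenario, is that there is no natural stopping point in that loop: once the successor does the researching, the humans who wrote the first version are no longer in it.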
It’s speculative, of course, but is it really possible? From the article:
No one really knows for sure. That’s partly because A.I. is a fractious and changing field, in which opinions differ; partly because so much of the latest A.I. research is proprietary and unpublished; and partly because there can be no firm answers to fundamentally speculative questions—only probabilities.
Rothman discerns that the probabilities themselves are subject to the subjectivities of worldviews.
But what is a world view, ultimately? World views are often reactive. We formulate them in response to provocations. Artificial intelligence has been unusually provocative. It has prompted reflections on the purpose of technology, the nature of progress, and the relationship between inventors and the rest of us. It’s been a Rorschach test. And it’s also arrived at a particular moment, in a particular discursive world, in which opinions are strong, objections are instant, and differences are emphasized. The dynamics of intellectual life lead to doubling down and digging in. We have feedback loops, too.
Whither Superintelligence?
The article concludes that our working lives might soon play out on “cognitive factory floors,” where we work alongside machines that augment our capabilities and expedite productive processes. Machines, though, will not take over, because of what Rothman calls a condensation of responsibility that occurs as the ratio of productive humans to automated machines diminishes. At the end of the automated day, agency, and the burdens it carries, remains with the humans on the cognitive factory floor, or at least with the humans in the higher reaches of the corporate hierarchy.
People — not all people, of course, but some people — call the shots. Given that people create and apply technologies, it’s difficult to envision how technology could possibly reverse the dynamic and seize control. The only way that could happen, even presuming that technology acquires something approximating superintelligence, is if the people in charge ceded control, willingly or through dereliction.

Remember, though, the people who run things in our world are highly ambitious and often just as ruthless. I just can’t envision them getting so slack that they’d take their eye off the prize and lose control. So, people will remain in command, using technology for purposes both benign and malign. That part of the story will be the same as it’s ever been.
As for the rest of us, our duty, as concerned citizens who have a considerable stake in the outcome, is to hold the people in charge to account. We should do what we can to ensure that technology is used to achieve the greatest possible good and the least harm. Those are easy words to type, but I recognize that fulfilling our part of the bargain will not be a breezy walk on the beach.
Jobs for the Bots
The issue of human responsibility in an AI-inflected world also arises in an article published yesterday in Axios. In that piece, Anthropic CEO Dario Amodei suggests that AI might obliterate half of all entry-level white-collar jobs, contributing to an unemployment rate of 10 to 20 percent in the next one to five years.
He doesn’t stop there. Amodei asserts that AI companies and governments are not giving us the straight goods about impending economic dislocations, including major job losses in technology, finance, law, consulting and other white-collar professions. Entry-level positions, according to Amodei, will be particularly hard hit: the gigs where young professionals begin their long slog and strenuous climb up the greasy pole of the corporate hierarchy. The consequences, for younger workers and society at large, could be considerable, with countless career progressions delayed or derailed.
Amodei says Anthropic’s own research indicates that companies begin using AI for role augmentation, helping humans do their jobs faster and more efficiently, but then consider whether and how AI’s intelligent automation might perform certain jobs, thereby displacing employees. The article cites several instances in which companies have followed such an evolutionary trajectory, using AI first to boost employee productivity before using it to make certain jobs redundant. An AI agent, after all, doesn’t require breaks or vacations, and it can work all day and all night because it doesn’t need sleep. What’s more, AI doesn’t require benefits packages or healthcare, and it will never complain or file litigation against its employer (as far as we know).
Nobody agrees on whether AI will fulfill its ballyhooed promise as a breathlessly hyped “world-changing” technology. That sort of sensationalism causes my eyes to glaze over, but perhaps Amodei, who has a vested interest in AI’s commercial success, is telling us what he believes to be the truth. To gauge whether his prophecy is accurate, we should closely monitor the nature and frequency of layoffs and hiring practices across industries and professions. The data will tell us whether AI is cratering the job market. If that happens, we can take remedial measures, or at least prod others, including our elected representatives, to do something about it.
For his part, Amodei proposes measures intended to mitigate economic and career carnage. The article lists Amodei’s proposals:
1) Speed up public awareness with government and AI companies more transparently explaining the workforce changes to come. Be clear that some jobs are so vulnerable that it's worth reflecting on your career path now. "The first step is warn," Amodei says. He created an Anthropic Economic Index, which provides real-world data on Claude usage across occupations, and the Anthropic Economic Advisory Council to help stoke public debate. Amodei said he hopes the index spurs other companies to share insights on how workers are using their models, giving policymakers a more comprehensive picture.
2) Slow down job displacement by helping American workers better understand how AI can augment their tasks now. That at least gives more people a legit shot at navigating this transition. Encourage CEOs to educate themselves and their workers.
3) Most members of Congress are woefully uninformed about the realities of AI and its effect on their constituents. Better-informed public officials can help better inform the public. A joint committee on AI or more formal briefings for all lawmakers would be a start. Same at the local level.
4) Begin debating policy solutions for an economy dominated by superhuman intelligence. This ranges from job retraining programs to innovative ways to spread wealth creation by big AI companies if Amodei's worst fears come true. "It's going to involve taxes on people like me, and maybe specifically on the AI companies," the Anthropic boss told us.
Better Safe than Under Siege
One idea Amodei floats is a token tax, which would be paid every time a customer uses a model that generates revenue for an AI company, such as Anthropic. Amodei says the token tax could be set at, say, three percent of revenue, with the proceeds allocated to some sort of reparatory wealth redistribution by government.
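For a sense of the arithmetic, here is a back-of-the-envelope sketch of how such a levy might be computed. The three percent rate comes from the article; the per-token pricing and usage figures below are invented purely to show the calculation.

```python
# Hypothetical token-tax arithmetic. Only the 3% rate is from the article;
# the price and the usage volume are made-up illustrative numbers.

TOKEN_TAX_RATE = 0.03          # 3% of revenue, per the proposal
PRICE_PER_MILLION_TOKENS = 15  # assumed price in dollars, for illustration

def token_tax(tokens_billed: int) -> float:
    """Tax owed on the revenue generated by a customer's token usage."""
    revenue = (tokens_billed / 1_000_000) * PRICE_PER_MILLION_TOKENS
    return revenue * TOKEN_TAX_RATE

# Example: a customer billed for 2 billion tokens in a month.
print(f"${token_tax(2_000_000_000):,.2f}")  # -> $900.00
```

At these invented numbers, the levy is modest for any single customer; the redistribution Amodei has in mind would come from aggregating it across an entire industry’s usage.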

Amodei says such a tax would not be in his economic interest. That’s where he’s wrong. If economic displacement occurs on the massive scale Amodei anticipates, everybody (even the richest among us) will have an interest in facilitating economic, political, and social adaptation. Failure to make the right systemic adjustments could result in the sort of social unrest that can easily spiral into a much bigger problem for the billionaire class than having to endure a modest tax increase.
I’m not predicting a contemporary reenactment of the French Revolution and the Reign of Terror — I don’t think the dispossessed peasants of Silicon Valley will be rolling out an AI-enabled guillotine — but, hey, very few people predict those things. We tend to have sunnier dispositions, and we view the future optimistically, but history and prudence advise that, even while we hope for the best, we should also anticipate and guard against the worst.