There’s No Consensus on Our AI-Inflected Future

Recent Study Says AI Tools Actually Slow Down Software Development

Even the AI cognoscenti, however you define them, are of at least two minds when it comes to the prospects for the technology. Will AI immeasurably improve life on this planet, or will it make everything intolerably worse? Will AI usher in a new era of employee productivity, spawning jobs that don’t yet exist? Is AI an existential threat, or the next stage in human evolution, a vaulting step toward the misted peak of transhumanism?

You could ask these questions of ten AI experts, and you’d get different answers from most of them. Everybody has an opinion, informed or otherwise, but nobody really knows. That’s what makes speculating about the future so captivating. As a prognosticator, you can claim to have foresight, you can assure others of the acuity of your vision, and you can brashly declare that you’re right and everybody else is wrong. You can do all that because nobody can prove you wrong, for the simple reason that the result has yet to be recorded in the book of history.

Even better, if you’re wrong about the future, few will remember your errant prognostication. I have a reasonably clear memory of Gartner predicting the overwhelming dominance of OS/2 over Windows in the late 1980s, yet I can find no evidence of that prediction now. It’s funny how that works.

Prognosticators can only feign, rather than possess, an unerring prescience. They pretend they know what’s coming, but they don’t really know. Nobody does. That doesn’t mean all views are equally valid: some people’s views are more logically grounded than others, and some opinions warrant more respect because they’re predicated on careful reasoning and data-based extrapolation. Such opinions have a greater probability of landing closer to reality’s bull’s eye, but they can still miss the mark, either because of a methodological oversight or an unpredictable development.

That means, as we look toward the future, we must embrace uncertainty. Can you live with ambiguity? Well, you have no choice. Living necessarily involves not knowing what will happen next.

Anxiety And Uncertainty

When there’s no consensus about the future — when we’re besieged by a stark disparity of views — people tend to get anxious. They want the illusion of certainty and predictability, even if actual certainty and predictability are unattainable. As we gaze into AI’s occluded crystal ball, we’re not even getting a coherent illusion. Instead, we’re getting the strident dissonance of strongly contrasting views.

Two articles this week illustrate the wildly divergent views on AI. From Reuters, we learn that AI research nonprofit METR (which stands for Model Evaluation and Threat Research, even though Reuters refers to the organization only by its acronym) posits that AI tools slow, rather than accelerate, the work of experienced software developers.

Here are the opening paragraphs of the article:

Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.
AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.
Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”
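For readers who like to see the arithmetic, here’s a quick sketch of what those three percentages imply. The 10-hour baseline task is my invention for illustration; the 24%, 20%, and 19% figures are the study’s, as reported by Reuters:

```python
# Illustrative arithmetic only. The 10-hour baseline is a hypothetical;
# the percentages are those reported for the METR study.
baseline_hours = 10.0

expected_decrease = 0.24   # pre-study estimate: AI cuts task time by 24%
perceived_decrease = 0.20  # post-study belief: AI cut task time by 20%
measured_increase = 0.19   # measured result: AI added 19% to task time

expected_time = baseline_hours * (1 - expected_decrease)    # ~7.6 hours
perceived_time = baseline_hours * (1 - perceived_decrease)  # ~8.0 hours
measured_time = baseline_hours * (1 + measured_increase)    # ~11.9 hours

# The perception gap: developers believed they had finished in about
# 8 hours' worth of work, while the measured time was nearly 12 hours.
perception_gap = measured_time - perceived_time             # ~3.9 hours
print(expected_time, perceived_time, measured_time, round(perception_gap, 1))
```

On a 10-hour task, in other words, the developers expected AI to save them about two and a half hours, believed afterward it had saved them two, and in fact it had cost them nearly two — a swing of roughly four hours between perception and measurement.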

Expectations Confounded

Great expectations are confounded, at least in this instance. Another study, from another group, might have produced different findings. Still, this research is worth considering, not least because it highlights the chasm that often exists between popular perception and reality.

You’ve probably heard or seen the hackneyed joke about the word assume, right? The punch line is contained within the word itself: When you assume, you make an “ass” of “u” and “me.” I try to stay grounded by reminding myself that I’m an ass. Try it sometime — I mean on yourself, not on me. I get enough grounding at home, where no bigheads are allowed.

Getting back to the METR study, the findings refute the conventional wisdom — which, in practice, is no wisdom at all — that AI invariably makes human software engineers more productive.

Expect this study to provoke brickbats and rebuttals. The criticism will arise not because its methodology or its execution was flawed, but because the headpin in the “bowling alley strategies” of many AI companies involves increasing software-engineering productivity, if not replacing software engineers with bots that don’t incur salaries, benefits, vacations, or anything else apart from their recurring subscription costs.

Bowling for Dollars

One of the companies promoting AI for software development is Anthropic, whose CEO, Dario Amodei, recently told Axios that AI could wipe out half of all entry-level white-collar jobs, including many in software development, within the next five years. Given Anthropic’s aspirations in AI-boosted software development, Amodei might find reason to pore over the METR study in a bid to find fault with its methodology or its data integrity.

The lucrative headpin

When there’s big money at risk — and there’s a lot of money on the AI table — the brows are furrowed and the knives are out. It is a rich irony that the vision of dispassionate, coolly proficient artificial intelligence is being pursued within the all-too-human emotional and financial crucible of winner-take-most, late-stage capitalism.

Reuters points out that prior research conflicts with METR’s findings, though the news service doesn’t attribute those studies to specific sources. I have to say, that’s a significant journalistic omission. Readers should be informed of the provenance of research cited in an article. Another bugbear of mine is when reporters quote think tanks, but fail to tell us how the think tank is funded and who funds it. These are relevant points, which help us, as readers, to understand the incentives and motivations behind research initiatives.

As for the METR study, it found that experienced developers — familiar with the vagaries and requirements associated with large, established open-source code bases — worked at a slower pace when using AI tools.

A direct quote from an author of the study:

“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed,” Becker said.
The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with.

Interestingly, the majority of study participants, as well as the authors of the study, continue to use the Cursor AI software-development tool. The authors suggest that they do so because the tool makes software development easier and “more pleasant, akin to editing an essay instead of staring at a blank page.” METR’s Becker observes that “developers have goals other than completing the task as soon as possible, so they’re going with this less-effortful route.”

CFOs Want Happy Employees, But Fewer of Them

I get that, I really do. That said, your garden-variety CFO might not see things the same way, particularly if they’re signing checks in the belief that AI will speed software-development processes, significantly boost productivity, and reduce headcount. Companies undoubtedly want contented and highly motivated employees, but they seem drawn to AI as a means of having fewer of those employees than they had before.

A recent article in Business Insider, building on an interview in The Guardian, gives us a decidedly different perspective on AI. The RethinkX think tank, which Business Insider does not identify further (there’s my bugbear, right on schedule), forecasts that robots and artificial intelligence could render most human jobs obsolete by 2045. What’s more, the think tank says there is little time to prepare for the labor carnage.

Before I continue, let me provide a link to the “About Us” section of the RethinkX website. As I mentioned, whenever a think tank is cited in an article I’m reading, I like to learn a little more about the think tank, its funding, and its mandate.

Now, back to the article. Adam Dorr, director of research at RethinkX, warns that machines are advancing so quickly that they’ll be able to perform nearly every job done today by humans, but at lower cost and at equal or better quality. (Quality, being by definition a qualitative term, is difficult to assess without some relevant quantified metric, which we do not find in the article.)

Looking back at historical patterns of technology disruption, Dorr draws an analogy between what’s about to befall our contemporary workforce and what happened to horse-drawn carriages after the advent of cars. He also compares the fate of workers today to the obliteration of traditional cameras following the rise of digital photography.

He says, “We’re the horses, we’re the film cameras.”

Slippery Eels: Comparisons and Analogies

Maybe we are, but saying it does not make it so. The analogies seem superficially compelling because they conflate known pasts with an unknown future, contending that what’s coming is exactly like what we’ve seen before. We should remember, however, that analogies are comparisons that serve an expository purpose, namely to explain and unfurl a narrative. Invoking an analogy makes for evocative storytelling, but it doesn’t necessarily mean that the analogy is apt. We won’t know whether the analogy has legs until the future becomes the present, which in this case is 20 years from now.

Here’s a quote from Dorr:

"Machines that can think are here, and their capabilities are expanding day by day with no end in sight," he said. "We don't have that long to get ready for this."

Can they, though? Can machines really think? What genAI does is not reasoning or thinking in any substantive human sense. As for artificial general intelligence (AGI), some technorati in Silicon Valley claim it’s already here, while others say it might never arrive. There’s also the troublesome matter of defining AGI, a term that seems to possess more mutability than some viruses.

Dorr is stating as fact something that remains an unsettled question. As the Replacements once put it, he seems to be telling us questions and asking us lies.

Telling Us Questions

The job apocalypse predicted by RethinkX isn’t absolute, apparently. According to the think tank, a narrow range of professional roles will endure, mostly those grounded in “human connection, trust, and ethical complexity.” I’ve commented on this topic previously. In my view, Dorr is significantly underestimating the number of jobs that necessarily involve, and cannot function without, the intricacy and subtlety of uniquely human emotional intelligence.

There’s Always Sex Work

After all, the roles he cites as survivors, in the sense that they will remain viable professions for humans, include sex worker, sports coach, politician, and ethicist.

That’s it? Really? I’m just chipping at the tip of the iceberg when I say human sports coaches will probably oversee human athletes. I can’t see soccer, basketball, baseball, hockey, or football fans wanting to pay big bucks to watch robots compete on the field, court, or ice, especially if the video below is any indication of what they’ll witness. (At least the referee is human, though I’m not certain that’s a job that will stand the test of time.)

Riveting Stuff — Ahem

If you’re buying what RethinkX is selling, however, give the matter further thought. If you’re like me, you’re probably too old for sex work, aren’t really qualified to coach a sports team (especially one comprising robots), have no interest in politics, and wonder just how many employment opportunities are open to ethicists.

Superabundance or Bust

RethinkX believes that the upshot of all this job displacement by intelligent robots will be mass inequality or “superabundance.” If the RethinkX vision of the future comes to pass, I think we’ll see more of the former than the latter, at least initially. Previous technology revolutions were followed only much later by social reforms and government programs that adjusted to new realities.

The Business Insider article concludes by summarizing the views of various AI luminaries on what the future might hold. Spoiler alert: There’s not much agreement.

It’s a rewarding mental exercise to think about the future, and it’s even economically and socially beneficial for us to devise contingency plans for various scenarios, from the best case to the worst case. But we won’t know whether those contingency plans will need to be implemented until the future knocks insistently on our door. After we’ve opened the door, we’ll know even more.

Subscribe to Crepuscular Circus
