No Easy Path to Effective AI Regulations

In his blog post, “Why AI Will Save the World,” Marc Andreessen flayed critics of AI development as Luddite simpletons and quaking “doomers,” naive victims of cunning schemes by industry giants pursuing “regulatory capture” and hammerlock domination of lucrative new markets.

But is that perspective accurate and justified? Are there no valid reasons to apply the brakes on certain types of AI development, to take a pause before forging ahead? Should venture-funded AI companies simply be left to their own devices in a libertarian wonderland of market magic, featuring the unerring prestidigitation of the “invisible hand”?

Many believe that we can’t afford to leave AI development exclusively to the rudimentary market dynamics of supply and demand. One individual with that view is Max Tegmark, a physicist and AI researcher at the Massachusetts Institute of Technology (MIT), who is also a cofounder of the Future of Life Institute, which recently issued an open letter calling for a pause in the development of advanced AI until researchers adopt a shared set of safety standards. Tegmark was one of about 33,000 signatories to the open letter, which included endorsements from many leading AI researchers and scientists.

Fast-Closing Window of Responsible Control

Tegmark’s AI concerns are featured in a Wall Street Journal article published on Friday, August 18. Tegmark does not quite fit the stereotype of a tech-shunning Luddite or a whimpering doomer. As the Wall Street Journal article notes, Tegmark has a longstanding track record as a committed technology optimist, having touted the potential of AI to help combat climate change, find cures for cancer, and address other major challenges confronting humanity. More recently, however, he has lost some of his faith in AI, and is now profoundly concerned about a “dangerous race” among technology companies to build and commercially release systems that remain poorly understood and wholly unregulated. Put simply, Tegmark believes that AI’s window of responsible control is closing fast.

According to Tegmark, many AI professionals harbor private concerns about the accelerating trajectory of AI technology. Tegmark suggests that it is “taboo in certain circles” — likely throughout Silicon Valley, where everybody wants a piece of the next big thing — to propose a prudent pause in AI development. Nobody wants to be called a Luddite or a doomer, and, as we’ve seen, that’s exactly the sort of disparagement reserved for anyone who calls for any delay of the high-speed AI express train. Industry players big and small feel tremendous commercial pressure to innovate and get to market faster than their competitors.

Subtle Censorship and Compliance

At one point in the article, while talking about ongoing antagonism toward those who discuss the risks associated with AI, Tegmark says, “It was kind of like the lung cancer and smoking debate in 1954. You couldn’t even talk about it without being seen as a weirdo.”

What Tegmark describes is an insidious form of groupthink and censorship prevalent in the IT industry. It quashes internal resistance through a discreet but unmistakable imposition of compliance and conformity. Cowed by the threat of disfavor and ostracism, we find that we censor ourselves, against our better judgment and even against our own personal ethical reservations. This is particularly true when big money is on the line. Personal objections can become career-limiting moves.

I want to be fair, though: Greed isn’t the only factor driving the industry's ruthless pursuit of AI’s bounty. The impulses driving technology entrepreneurs and their financiers are at least twofold, and sometimes manifold. It’s a complicated world filled with complex people. In all cases, though, fear and greed are always present and often prominent, taking turns cracking the whip. Fear’s narrative pull is invariably accompanied by an ethical justification: “If I don’t do it, somebody else will. Those who put their ethical qualms aside and push relentlessly ahead will make a lot of money, while I’ll get nothing for my probity. What’s worse, the somebody else who takes the initiative might be based in China, and their triumph could result in an AI deficit that imperils the prosperity, defense, and even the viability of Western economies and societies.”

Unfortunately — and even putting aside the thorny issue of geopolitics — fear is right when it rationalizes that somebody else will take the initiative. In fact, with regard to AI, somebody else is doing it right now: in the U.S., in other parts of the Western world, and in China.

Can We Advance Responsibly?

Tegmark’s prescription for remedial action falls short in this regard. He’s quoted in the story saying, “The only way there will be a pause is if the U.S. government comes in and says, ‘Hey, you can’t release stuff that doesn’t meet these safety rules.’” Later in the article, he proposes the creation of an AI regulatory body analogous to the Food and Drug Administration, capable of requiring companies to establish that their products are safe before being granted approval to release them to the public.

As we’ve already established, however, AI has a life beyond the U.S. Any practical regulatory controls would need to be implemented internationally, through global agreements, and those would take time to draft properly. Further, any international agreement would need to be enforceable, encompassing provisions that ensure verification and compliance with terms and conditions. It’s a major undertaking, and, as Tegmark would doubtless agree, time isn’t on our side. Given the global environment in which AI exists, and acknowledging the bristling Cold War 2.0 tensions that have come to predominate, it seems unlikely that we can fit AI with a globe-girding harness of effective and ubiquitous regulatory controls.

Tegmark himself understands that the tenor of the times is not conducive to cooperation. He also suggests that technology has already played a leading role in making humans more disagreeable, petulant, and combative. As he remarks in the Wall Street Journal article, “Algorithms don’t care about accuracy or nuance. They are trained to maximize engagement. The result has been there’s so much more hate going around, not just within nations but between countries.” It troubles him that lawmakers are “so cozy with the companies they are supposed to regulate,” even enlisting tech CEOs to provide counsel on managing the industry.

Beyond Lobbies and Regulatory Capture

I suppose Tegmark is alluding to “regulatory capture,” but his version is starkly different from the regulatory capture that Marc Andreessen had in mind in his “Why AI Will Save the World” missive. Andreessen was concerned that the regulators were being captured by industry giants who sought to achieve an oligopoly or monopoly over AI technologies, whereas Tegmark’s concern is that the lawmakers and regulators have been captured by those who are resolutely opposed to any form of regulatory control or restriction. These opposing views of “regulatory capture” align only in their implicit assumption that regulators are, in theory, critical gatekeepers. Are the gatekeepers influenced only by industry lobbyists, or are other voices, such as Tegmark’s and perhaps yours, being heard and considered? Lobbying is a major industry in the world’s national capitals, and in recent years, the technology giants have leveraged it extensively, becoming some of lobbyists’ most lucrative clients.

There is no simple solution to this conundrum, but an informed and engaged public might yet make a difference to how the future unfolds. Rather than promoting and hyping a ludicrous cage match between Musk and Zuckerberg, perhaps we should be encouraging a series of substantive public debates on responsible development and utilization of the next generation of AI technology.
