
An AI 'Pause' Would Be a Disaster for Innovation

Like every worthwhile technology, artificial intelligence poses risks. That's no reason to stop progress.

(Bloomberg Opinion) — At least since Aeschylus, humanity has been warning itself about the dangers of unbounded technology. Last week, more than 1,000 researchers and executives added to this canon with an open letter calling for a pause in research on artificial intelligence.

It's a sobering read. The letter warns that AI may (among other things) imperil jobs, spread propaganda, undermine civil discourse, even lead to the "loss of control of our civilization." It calls for a six-month moratorium on advanced research in the field and proposes that industry leaders come up with safety protocols and governance systems to rein in potential risks.

Many of the signatories are thoughtful and experienced AI practitioners. Their concerns should be taken seriously. On balance, though, their approach seems likely to do more harm than good.

The problem is not with the "pause" per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn't do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it's hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

Consider the broader worldview expressed in this document. The signatories call for "new and capable regulatory authorities," a "robust auditing and certification ecosystem," "well-resourced institutions for coping with the dramatic economic and political disruptions" AI may cause, and more. They add: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it's home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It's no accident that all the leading AI efforts are American too.

Slowing AI progress, moreover, carries risks all its own. For all the doom and gloom, don't forget that this technology is likely to make the world richer, healthier, smarter and more productive for decades to come. By 2030, it might be contributing more than $15 trillion to the global economy. Advances in medicine, biology, climate science, education, business processes, manufacturing, customer service, transportation and much more are on the horizon. Any new rules must be balanced against the vast potential of these efforts.

Nor is AI research advancing into a void. The industry already operates within legal parameters — liability regimes, consumer-protection laws, torts and so on — that are responsive to potential harms. Companies have every incentive to ensure their products are safe. Trade associations are developing codes of conduct and ethical frameworks. Far from the "out-of-control race" alleged by the letter's signatories, the AI business is constrained by law and politics and consumer sentiment just like any other.

That's not to say potential dangers should be ignored. But rather than trying to anticipate every risk, regulators should let entrepreneurship flourish while efforts to monitor and improve AI safety proceed in parallel. Governments should fund research into AI risks and publish best practices; the Artificial Intelligence Risk Management Framework produced by the National Institute of Standards and Technology is an exemplar. Lawmakers should ensure that companies are transparent and consumers are protected, while being alert to any novel threats.

It's natural to worry about new technologies. But the wealth and abundance of American society is due in no small part to risks taken in the past, in a spirit of openness and optimism. The AI revolution deserves no less.
