AI Whistleblowers Are Stepping Up. It’s About Time.

The end of some gag orders and the growing clout of a sophisticated safety team in Britain are encouraging signs that AI is being made more accountable.

Bloomberg News

June 17, 2024

OpenAI CEO Sam Altman (left) talks with Apple's Eddy Cue during the Apple Worldwide Developers Conference (WWDC) on June 10. Getty Images

(Bloomberg Opinion/Parmy Olson) -- Here's an AI advancement that should benefit all of us: It's getting easier for builders of artificial intelligence to warn the world about the harms their algorithms can cause, from spreading misinformation and displacing jobs to hallucinating and enabling a new form of surveillance. But who can these would-be whistleblowers turn to? An encouraging shift toward better oversight is underway, thanks to changes in compensation policies, renewed momentum among engineers to speak out and the growing clout of a British government-backed safety group.

The financial changes are the most consequential. AI workers suffer from the ultimate First World problem, in that they can make seven or eight figures in stock options if they stick it out with the right company for several years, and if they also keep quiet about its problems when they leave. Get caught speaking out, according to recent reporting by Vox, and they lose the chance to become millionaires. That has kept many of them silent, according to an open letter published this month by 13 former OpenAI and Google DeepMind employees, six of whom remained anonymous.

OpenAI’s response to such complaints has been encouraging. It not only apologized, but said it would free most of its past employees from those non-disparagement requirements. Daniel Kokotajlo, a former OpenAI employee who admirably refused to sign the gag order and stood to lose $1.7 million (the majority of his net worth, according to the New York Times), will now be able to liquidate his shares and get that money, his lawyer, Lawrence Lessig, tells me.


The heartening development here isn’t that already-well-paid AI scientists are getting more money or protecting their lucrative careers, but that a powerful motivator for keeping silent is no more, at least at OpenAI. Lessig, who met with more than half a dozen former OpenAI employees earlier this year to hammer out a series of pledges that AI-building companies should make, wants at least one AI firm to agree to all of them. 

That’s probably a tall order. But decoupling non-disparagement agreements from compensation is a promising first step, and one that other Big Tech companies, which employ more than 33,000 AI-focused workers today, should follow if they don’t already have such a policy in place. Encouragingly, a spokeswoman for OpenAI rival Anthropic says the company doesn’t have such controversial gag orders in place.

Those companies should do more by making it easier for whistleblowers to sound the alarm. OpenAI has said it has a “hotline” available to its engineers, but that doesn’t mean much when the line only goes to company bosses.


A better setup would be an online portal through which AI engineers can submit concerns to both their bosses and people outside the company who have the technical expertise to evaluate risks. Absent any official AI regulators, who should be that third party? There are, of course, existing watchdogs like the US Federal Trade Commission and the Department of Justice, but another option is Britain’s AI Safety Institute (AISI).

Bankrolled by the UK government, it’s the world’s only state-backed entity that has managed to secure agreements from eight of the world’s leading tech companies, including Alphabet Inc.’s Google, Microsoft Corp. and OpenAI, to safety test their AI models before and after they’re deployed to the public.

That makes Britain’s AISI the closest equivalent to weapons inspectors in the fast-moving field. So far, it has tested five AI models from several leading firms for national-security risks. 

The organization has 30 staff members and is in the process of setting up an office in San Francisco. It pays some senior researchers around £135,000 (about $170,000) a year, according to its latest job listings, far less than a roughly equivalent role at Google’s headquarters in Mountain View, California, would pay (more than $1 million in total compensation). Even so, the organization has managed to hire former directors of OpenAI and Google DeepMind.


It might seem awkward for Silicon Valley engineers to reach out to an organization overseas, but there’s no denying that the algorithms they’re fashioning have global reach. In the short term, the UK acts as a handy midpoint between the US and Europe, or even the US and China, to mediate concerns.

The mechanisms for whistleblowing in AI still have some way to go, but speaking up is at least a more viable option for the field than it ever was. That is cause for celebration, and hopefully it will build momentum for others to come forward too.
