Amazon, Google Group Asks Regulators to Keep Their Hands Off AI

The Information Technology Industry Council released “AI Principles,” spelling out how governments should approach AI.

(Bloomberg) -- A lobbying group representing top artificial-intelligence companies including Amazon.com Inc., Facebook Inc. and Google issued a warning to lawmakers on Tuesday: hands off our algorithms.

The Information Technology Industry Council released “AI Principles,” spelling out how governments should approach AI, a technology that lets computers learn by themselves, and what the industry sees as its own responsibilities.

Government should “use caution before adopting new laws, regulations or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI,” according to an executive summary of the principles.

Big tech companies, and their software, are coming under more scrutiny in the wake of news that Russian-sponsored accounts used social networks to spread discord and try to influence the outcome of the 2016 U.S. presidential election. Algorithms designed by Facebook, Twitter Inc. and Google have also been criticized for increasing political polarization by giving people the type of news they already agree with, creating so-called “filter bubbles.”

“We are at the early stages of the commercialization of AI,” ITI President Dean Garfield said. “Given the reach of AI, we think it’s critical that society, governments, and the technology sector work together to begin to solve some of the most complex issues.” ITI presented the principles Tuesday at a conference in Washington hosted by a unit of Bloomberg LP, the parent company of Bloomberg News.

ITI stressed that governments shouldn’t ask companies to share or expose the code behind their AI systems. That touches on a key issue for the companies, which don’t want to have to explain the workings of AI systems that produce results their creators sometimes don’t fully understand themselves. In Europe, regulators have told tech companies they don’t mind how algorithms work, as long as they don’t break the law.

Facebook’s head of security, Alex Stamos, took to Twitter earlier this month to warn that the issue of fake news on social networks is more complicated than it’s often made out to be, and that trying to interfere with algorithms could have unintended consequences.

AI is one of the hottest fields of technological research, with Alphabet Inc.’s Google, Amazon, Facebook and Chinese internet giants like Baidu Inc. racing to develop computer programs that learn on their own. ITI estimates AI will add at least $7 trillion to the global economy by 2025.

Despite the promise, criticisms range from Elon Musk’s fears of AI being an existential threat to humanity to the problem of mostly male scientists and developers injecting their own biases into AI.

Tech companies have a responsibility to make sure the data and tools they use to develop AI mitigate bias, Garfield said, describing the principles as a commitment from industry to work on this problem.

The principles also acknowledge concerns that AI will disrupt labor markets, pushing more people out of work as computers learn to do tasks better than humans. Education, private-public partnerships and building AI programs that help people do their jobs should be priorities, the ITI report said.

"Any time you are driving innovation in a society that is transformative and as a result potentially uncomfortable, there are going to be points of tension," Garfield said. "The thing that we are trying to do is to make sure we’re at the center of trying to identify those points of tension and trying to resolve them. The bottom line is we hear the concerns that are being raised."
