Artificial Intelligence Ethics: Google’s Pentagon AI Move Highlights Brewing Issues

As the application of AI technology rapidly evolves, artificial intelligence ethics will only become more complicated.

Google was once famous for the motto "Don’t be evil." The definition of evil varies from person to person, of course, but many members of the tech giant’s artificial intelligence staff seemed to think that work for the Pentagon fit the bill. Thousands of Google employees protested the Pentagon project amid concerns that such work could lead to the use of AI in advanced weaponry.

Google employees' concerns about Project Maven have been allayed--at least for now. During a weekly team meeting on June 1, the company announced that its artificial intelligence contract with the Pentagon wouldn’t be renewed when it expires next year. However, this is just one of what will undoubtedly be countless debates over artificial intelligence ethics.

After the announcement about the Pentagon's Project Maven--which uses AI to interpret video images, a capability that could be applied to drone strikes--Google CEO Sundar Pichai published a list of the company’s AI principles, including its beliefs that AI should be socially beneficial, that it should avoid creating or reinforcing unfair bias, and that it should be accountable to people.

But the topic of artificial intelligence ethics--and debate within companies over what is and isn't OK--will come up again and again. 

The reality is that most tech companies of any significance perform work within and for the defense community--directly or indirectly. That’s not an inherently bad thing, said Gabe Batstone, CEO of Contextere, which works with Lockheed Martin on the C-130J program, providing indirect support to the Department of Defense. But there are ethical concerns to consider in doing that work, such as the balance of privacy versus security and the potential for embedded bias in automated systems.

“The military in many ways is a microcosm of society as they deliver much more than weapons or policy by other means,” Batstone said. “They are involved in education, healthcare, construction and engineering in an environment where professionalism and oversight are far more developed than their companion environments in the commercial sector.”

Balancing Ethics and AI Development

Ethics are not a new concern for artificial intelligence, even if the conversation around tech ethics does seem to have grown quickly thanks to the Cambridge Analytica and Facebook scandals.

“Ethical considerations and the societal effects of technology products are becoming more and more important in AI, as well as in the tech industry as a whole,” said Briana Brownell, founder and CEO of PureStrategy.ai. “There are ongoing efforts to improve standardization within the AI technology industry, especially as it relates to ensuring public trust in the technology.”

This focus on AI and ethics has dual effects, said David Tal, the president of Quantumrun Forecasting.

“On one hand, it's an important reminder to AI professionals about the responsibility they have when researching such a disruptive field,” Tal said. “But on the other hand, it has a chilling effect on the military's ability to recruit talented AI professionals to work on legitimate, AI-related national defense initiatives.”

A Shifting Industry with Shifting Moral Guidelines

It’s worth considering that AI and machine learning technology are changing rapidly, Brownell said, and we’re only now seeing major effects that couldn’t have been anticipated when the technology was originally built.

“Understanding the ways in which these technologies have the potential to cause harm is very important, but it's an area that at the moment gets very little thought,” Brownell said. “It’s a conversation that is just beginning, but I predict that it will become increasingly important.”

Given the rapidly changing nature of the industry, there are positive ways to leverage defense work--for example, in testing emergent technology, Batstone said.

“In my experience, there are few better places to explore and test the impacts of emergent technology than the defense community,” Batstone said. “As a group they are focused on delivering outcomes, have standard tactics, techniques and procedures (TTP), and are genuinely interested in understanding not only the benefits of new capabilities but also of tracking unintended consequences.”

Even if the conversation is young, it has never been more important, Park said.

“With AI, personal data and business ethics all in ascendance, the need for ethical IT policies has never been greater,” Park said. “Otherwise, companies risk building services and products that fall short of the ethics and trust that they have been given by employees.”
