On March 26, Google announced its Advanced Technology External Advisory Council--an advisory artificial intelligence ethics board. On April 4, the tech giant made another announcement, this time shutting the board down after a week of scrutiny, petitions and resignations. The short life of the board is another example of how organizations are wrestling with the ethical considerations of a technology that is changing quickly, in ways that often outpace the comfort levels of tech employees, members of the public, government officials and researchers.
The ill-fated artificial intelligence ethics board, announced less than a year after the company released its ethical AI charter, was intended to guide responsible AI development at Google, the company said. The eight members of the board were to meet four times in 2019 to discuss the tech giant’s AI program and address concerns.
“This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work,” Google said in its announcement.
But the ethics board quickly came apart at the seams. Within a week, privacy researcher Alessandro Acquisti announced via Twitter that he would not serve on the board. Two others named to the AI board--Heritage Foundation president Kay Coles James and drone company CEO Dyan Gibbens--inspired petitions demanding their removal.
The petition calling for James’ removal from the board was signed by thousands of Google’s own employees, Vox reported, 1,800 of whom signed an open letter opposing James’ position on trans rights. There were also concerns about the inclusion of Gibbens, whose company, Trumbull Unmanned, has worked with the U.S. military; her inclusion on the AI board resurfaced an ethical dilemma from Google’s recent past, when it was revealed that the company had worked with the military on drone technology.
Lessons for Smaller Enterprises
For some, the situation illustrated the problems with such boards in the first place. Google’s planned AI ethics board wasn’t going to have real decision-making power; it would have been advisory, not prescriptive, which led some to charge that the board would be more a PR exercise than anything else. At the same time, Google still has its Advanced Technology Review Council, which was formed last year and is made up of highly placed people in the company.
The field of artificial intelligence is changing rapidly, which means that Google’s plans in the space are evolving all the time. It’s not inconceivable that in the three months between board meetings, the scope or direction of one or another of the company’s AI projects--which are themselves varied and significant--could shift considerably. It’s a big ask for the members of an unpaid board not only to keep up with developments in a highly technical field like AI at one of the world’s tech giants, but also to advise on the ethics of those developments.
Google is no ordinary company--it is one of the world’s Big Four tech companies, with nearly 100,000 employees. But Google’s experience here holds lessons for smaller enterprises, as well.
There is a lot of excitement about the potential of artificial intelligence, but there is also a lot of wariness. A January 2019 report from the Center for the Governance of AI at Oxford found that American support for AI development is mixed, and a significant majority believe its development should be carefully managed. Even the administration of President Donald Trump, who has shown a willingness to break from international cooperation on other matters, has resumed working with the Organization for Economic Cooperation and Development on international guidelines for AI development and use. And Yoshua Bengio, one of three AI researchers who recently won the Turing Award, is working on a set of ethical AI guidelines.
These larger efforts focused on AI ethics and guidelines will trickle down the chain. Organizations interested in making AI part of their operations--and their numbers are growing--need to consider the concerns raised by implementing and operating these technologies. Those concerns are wide ranging, from the ethical use of the technology itself to the storage and security needs of the data that powers machine learning and artificial intelligence. As the reaction from Google’s own workforce shows, employees are paying attention.