(Bloomberg) -- The Pentagon’s new head of artificial intelligence wants to speed up technological modernization after an onslaught of what he calls “valid” criticism from recently departed senior leaders who expressed frustration at slow progress.
Craig Martell, who previously led machine learning at Lyft Inc. and Dropbox Inc. and headed AI initiatives at LinkedIn, told Bloomberg News that he wanted to make progress despite the department’s labyrinthine “bureaucratic inertia.” He spoke in his first interview since starting his job as the Pentagon’s chief digital and artificial intelligence officer.
Martell’s arrival is a boost for the Pentagon, which is seeking to attract expert talent from the private sector. Martell, who said his first day at the Pentagon on Monday was “overwhelming,” added he had taken a “not trivial” pay cut to do the job.
“It’s not my goal to come in here and change the entire culture of the DoD. It’s my goal to demonstrate that with the right cultural changes, we can have really big impact,” he said, adding that the opinions of senior leaders who had left were “mostly correct.”
Several senior defense officials working on technological modernization have recently left the Department of Defense and expressed frustration at the slow pace of change, with some citing concerns that China – which researchers say is heavily investing in artificial intelligence for the military – could overtake US defense capabilities.
Nicolas Chaillan resigned his job as chief software officer for the Air Force last year and derided military leaders who he claimed were ill-equipped to spearhead technological change needed to outpace China. Earlier this year, three Pentagon officials left their jobs and urged the department to accelerate efforts to adopt cutting-edge digital technology: David Spirk, the chief data officer; Preston Dunlap, the Air Force’s chief architect; and Jason Weiss, the Defense Department’s chief software officer.
Spirk told Bloomberg on Monday that the new team spearheaded by Martell, along with the reorganization of the Pentagon office he now leads, was “exactly what the Department of Defense needs.”
The Pentagon has previously run into ethical concerns over its potential use of AI. For instance, some Google employees refused to work on potential military applications for artificial intelligence, amid concerns over potential future use of autonomous lethal weapons and targeting. Martell’s responsibilities include work on “algorithmic warfare,” an under-defined concept that seeks to apply artificial intelligence to combat.
In a paper for the Center for Strategic and International Studies published on Monday, Gregory Allen, who led strategy and policy for AI at the Pentagon until stepping down in April, said the department needed to update its nearly 10-year-old policy on the use of autonomous weapon systems and address the role of AI-enabled systems and machine learning. Despite regular rhetoric from senior defense officials that there would always be a “human in the loop” in any decision to fire a weapon, there is no explicit ban on autonomous weapons or written requirement that humans must authorize weapons engagement, he said.
Martell said he was attracted to the role because his remit was “responsible AI.”
“For me, whenever there are lives on the line, humans should be in the loop,” he said, adding the Pentagon needed to have robust ethical guidelines for the use of artificial intelligence in warfare and to ensure that machines would be 99.999% correct before any were deployed. His office will feed into an update of the department’s policy on autonomy in weapons systems.
But he said he didn’t believe “we’re ever gonna get to Skynet,” a reference to the hostile artificial intelligence that threatens humanity in the Terminator movie series.