A military strike based on artificial intelligence is no longer pure science fiction, but a looming possibility.
Professor Ashley Deeks of the University of Virginia School of Law says in Episode 8 of the school’s “Common Law” podcast that it’s important for the United States to get ahead of technology that can have national security implications.
“Right now, I would say the law is lagging behind in both the international space and the domestic space,” Deeks said in the episode. “You often see that technology gets out in front of law. That’s not to say there are no legal rules that exist to regulate how states use these tools.”
But, she later added, “It might be more complicated to apply rules that were written in a pre-technological era.”
An expert on national security law who previously served in the U.S. State Department’s Office of the Legal Adviser and at the U.S. embassy in Baghdad, Deeks explored some of the potential downstream effects of new technology that may be used for, or otherwise influence, national security.
Predictive algorithms are one such technology. In the United States, algorithms are already being used to help police departments determine how and where to apply resources, and to aid judges in making sentencing and parole decisions – with the main goal of preventing future crime. Deeks said she believes one military application could be using the technology to identify potential enemy combatants.
“So [military officials are] asking themselves questions similar to the ones that our police are asking in the criminal justice/law enforcement setting,” she said.
China has been accelerating the race to use new technology, in particular through its widespread use of video surveillance, which can be mined for algorithmic review, including facial-recognition data points. The U.S., too, has thousands of hours of video collected from drones.
Regarding the video technology, “I think we and the Chinese are right out in front,” she said.
Deeks and hosts Risa Goluboff and Leslie Kendrick – the Law School’s dean and vice dean, respectively – also discuss issues that could arise from these new possibilities, including “killer robots” that could act autonomously based on artificial intelligence programming, and look at avenues for addressing concerns, such as privacy issues.