Q. First, what is your reaction to San Francisco’s decision?
A. I’m surprised to see this coming out of San Francisco, which generally is a very liberal jurisdiction, rather than out of a city known for being “tough on crime.” But it’s also important to understand what capabilities San Francisco’s robots have and don’t have. These are not autonomous systems that could independently select whom to use force against. Police officers will be operating them, even if remotely and from some distance. So calling them “killer robots” could be a little misleading.
Q. When a police officer uses deadly force, he or she is ultimately responsible for the decision. What complications might arise, legally, if a robot does the actual killing?
A. According to the Washington Post, the San Francisco Police Department doesn’t plan to equip its robots with firearms in the future. Instead, this policy seems to envision a situation in which the police could equip a robot with something like an explosive, a taser or a smoke grenade. San Francisco’s policy would still involve a human “in the loop,” since a human would be remotely piloting the robot, controlling where it goes, and deciding whether and when the robot should ignite explosives or incapacitate a suspect. So the link between a human decision-maker and the use of deadly force would still be easy to identify.
Where it could get more complicated is if the robot malfunctions and accidentally harms someone through no fault of the operator. If the victim or her family sues, there could be questions about whether to hold the manufacturer, the police department or both responsible. But that isn’t such a different question from what happens when a police officer’s gun accidentally fires and injures someone because of a manufacturing flaw.
Q. Aside from the legal questions, what are the ethical questions society will have to face when robots take life? Or are the legal and ethical questions intertwined?
A. The legal and ethical questions are related. Ideally, the legal rules that states and localities enact will reflect careful thinking about ethics, as well as the Constitution, federal and state laws, and smart policy choices. On one side of the balance are the considerable benefits that come from tools that help protect police officers and innocent citizens from harm. Since many uses of deadly force happen because officers fear for their lives, properly regulated and carefully used robots could reduce the use of deadly force because they could reduce the number of situations in which officers are at risk.
On the other side are concerns about rendering police departments more willing to use force, even when it’s not a last resort; about accidents that could arise if the robotic systems aren’t carefully tested or the users aren’t well trained; and about whether the use of robots in this way somehow cracks open the door to the future use of systems that have more autonomy in law enforcement decision-making.
One novel question that could arise is whether police departments should establish more cautious use-of-force policies when it’s a robot delivering the force, because the robot itself can’t be killed or injured the way a human officer can. In other words, we may not want to allow robots to use force to defend themselves.