San Francisco considers allowing law enforcement robots to use lethal force


SUNRISE, FL - OCTOBER 24: The Broward Sheriff's Office bomb squad deploys a robotic vehicle to investigate a suspicious package in the building where Rep. Debbie Wasserman Schultz (D-FL) has an office on October 24, 2018 in Sunrise, Florida. A number of suspicious packages arrived in the mail today intended for former President Barack Obama, Democratic presidential nominee Hillary Clinton and the New York office of CNN. (Photo by Joe Raedle/Getty Images)

Should robots working alongside law enforcement be used to deploy deadly force?

The San Francisco Board of Supervisors is weighing that question this week as they consider a policy proposal that would allow the San Francisco Police Department (SFPD) to use robots as a deadly force against a suspect.

A new California law that took effect this year requires every municipality in the state to list and define the authorized uses of all military-grade equipment held by its local law enforcement agencies.

The original draft of SFPD's policy was silent on the matter of robots.

Aaron Peskin, a member of the city's Board of Supervisors, added a line to SFPD's original draft policy that stated, "Robots shall not be used as a Use of Force against any person."

The SFPD crossed out that sentence with a red line and returned the draft.

Their altered proposal outlines that "robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to the SFPD."

The SFPD currently has 12 functioning robots. They are remote-controlled and typically used to gain situational awareness and survey specific areas officers may not be able to reach. They are also used to investigate and defuse potential bombs, or to aid in hostage negotiations.

Peskin says much of the military-grade equipment sold to cities for police departments to use was issued by the federal government, but there's not a lot of regulation surrounding how robots are to be used. "It would be lovely if the federal government had instructions or guidance. Meanwhile, we are doing our best to get up to speed."

The idea of robots being legally allowed to kill has garnered some controversy. In October, a number of robotics companies, including Hyundai's Boston Dynamics, signed an open letter saying that general-purpose robots should not be weaponized.

Ryan Calo is a law and information science professor at the University of Washington and also studies robotics. He says he's long been concerned about the increasing militarization of police forces, but that police units across the country might be attracted to utilizing robots because "it permits officers to incapacitate a dangerous individual without putting themselves in harm's way."

Robots could keep suspects safer, too, Calo points out. When officers use lethal force at their own discretion, the justification is often that the officer felt unsafe and perceived a threat. But he notes, "you send robots into a situation and there just isn't any reason to use lethal force because no one is actually endangered."

The first reported use of a robot as deadly force by law enforcement in the United States came in 2016, when the Dallas Police Department used a bomb-disposal robot armed with an explosive device to kill a suspect who had shot and killed five police officers.

In an email statement to NPR, SFPD public information officer Allison Maxie wrote, "the SFPD does not own or operate robots outfitted with lethal force options and the Department has no plans to outfit robots with any type of firearm." Though robots can potentially be equipped with explosive charges to breach certain structures, they would only be used in extreme circumstances. The statement continued, "No policy can anticipate every conceivable situation or exceptional circumstance which officers may face. The SFPD must be prepared, and have the ability, to respond proportionally."

Paul Scharre is author of the book Army Of None: Autonomous Weapons And The Future Of War. He helped create the U.S. policy for autonomous weapons used in war.

Scharre notes there is an important distinction between how robots are used in the military versus law enforcement. For one, robots used by law enforcement are not autonomous, meaning they are still controlled by a human.

"For the military, they're used in combat against an enemy and the purpose of that is to kill the enemy. That is not and should not be the purpose for police forces," Scharre says. "They're there to protect citizens, and there may be situations where they need to use deadly force, but those should be absolutely a last resort."

What is concerning about SFPD's proposal, Scharre says, is that it doesn't seem to be well thought out.

"Once you've authorized this kind of use, it can be very hard to walk that back." He says that this proposal sets up a false choice between using a robot for deadly force or putting law enforcement officers at risk. Scharre suggests that robots could instead be sent in with a non-lethal weapon to incapacitate a person without endangering officers.

As someone who studies robotics, Ryan Calo says that the idea of 'killer robots' is a launchpad for a bigger discussion about our relationship to technology and AI.

When it comes to robots being out in the field, Calo thinks about what happens if the technology fails and a robot accidentally kills or injures a person.

"It becomes very difficult to disentangle who is responsible. Is it the people using the technology? Is it the people that design the technology?" Calo asks.

With people, we can unpack the social and cultural dynamics of a situation, something you can't do with a robot.

"They feel like entities to us in a way that other technology doesn't," Calo says. "And so when you have a robot in the mix, all of a sudden not only do you have this question about who is responsible, which humans, you also have this strong sense that the robot is a participant."

Even if robots could be used to keep humans safe, Calo raises one more question: "We have to ask ourselves do we want to be in a society where police kill people with robots? It feels so deeply dehumanizing and militaristic."

The San Francisco Board of Supervisors meets Tuesday to discuss how robots could be used by the SFPD.

This story has been updated to include portions of an email statement to NPR by the SFPD. Copyright 2022 NPR. To see more, visit https://www.npr.org.

Transcript:

ARI SHAPIRO, HOST:

Lethal robots will be on the agenda tomorrow at a meeting of the San Francisco Board of Supervisors. The question is whether the city's police department can use robots to kill people.

Professor Ryan Calo studies robotics and law at the University of Washington. Welcome back to ALL THINGS CONSIDERED.

RYAN CALO: Glad to be here.

SHAPIRO: The Dallas Police Department used a robot to kill a suspect back in 2016, so this is not unheard of. What kinds of scenarios are we talking about here?

CALO: Typically, the way that the police use robots is either to gain situational awareness using a drone in the air or else to investigate a suspicious object that could be a bomb or to negotiate with a suspect in a hostage situation. There's been almost no instances of any violence at all through robots except for Dallas. But there have been multiple instances when people have shot robots. So people in hostage situations have unloaded shotguns on police robots before and shot robots out of the air. And so there is a - the history of violence in robots goes human to robot, and the idea that the robots would be able to fire back is disturbing.

SHAPIRO: And so do you think San Francisco having this debate is letting science fiction leach into reality, or is it a helpful setting of rules before we get to a place where people are confronted with a situation there may not be rules for?

CALO: I do think that police departments that have robots should have policies in place talking about how the robots can and can't be used. I think that's healthy. Often the way that police shootings are justified or attempted to be justified is by reference to the officer's safety. So the dream with robotics is that if nonlethal force, for example, could incapacitate a suspect without the officer ever feeling threatened, then there would be less reason for police to use force. But in actual fact, it is very difficult to gain enough understanding of a situation from a distance with a robot to know when to use force and when not to.

SHAPIRO: So you say it would be good for police departments to have standards to follow. Let's talk about what that standard should be. In San Francisco, the initial policy draft prohibited use of robots to deploy deadly force. Now, the current draft policy says, quote, "robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD." What do you think of that as a standard?

CALO: First of all, I think it's good for the police department to set standards in advance. And second, I'm glad to see that the standard is so narrow, that you really have to be out of other options. Still, you worry about whether there'll be any situation where the public would feel comfortable with police using lethal force through a robot. You know, even if we thought that there might be some scenarios where our best option is to incapacitate a suspect through deadly force through a robot, it feels so deeply dehumanizing and militaristic. And so, you know, the very prospect of a robot being able to kill someone could be something that the people of San Francisco won't tolerate, even if, you know, there are some very narrow circumstances where it's sort of the best option from a tactical perspective.

SHAPIRO: And then there's also the question of accountability. If there's a scenario where a robot hurts or kills the wrong person, who gets put on trial for that?

CALO: Yeah. I mean, it's so interesting the way in which technology, and especially robotics, makes a kind of shell game of responsibility, right? You don't know who is responsible. Is it the officer operating the robot? Is it the people that made the robot? Sometimes it feels like it's the robot itself, right? I mean, if there were an incident where an officer hit someone with their car or shot somebody, we would expect that the very next day, there would be police cars and guns on the streets. They're just equipment. But when a robot is involved in violence, we would expect the whole robotics program to be suspended. Why is that?

And it has to do with the fact that robots feel different to us. We associate them with science fiction, and we're deeply uncomfortable with them being armed. And if you couple that with the racialized, often untrusting and uncomfortable policing environment we have today, that's not a good mix, right? I mean, technologies that we don't totally understand or trust in the hands of a police force that is still reckoning with a century of racial violence, it's not a comfortable combination.

SHAPIRO: That's Ryan Calo. He's a law and information science professor at the University of Washington. Thanks a lot.

CALO: Thank you, Ari.

MARY LOUISE KELLY, HOST:

After we taped this conversation, a spokesperson for the San Francisco Police Department wrote us to say, quote, "no policy can anticipate every conceivable situation or exceptional circumstance which officers may face. The SFPD must be prepared and have the ability to respond proportionately." Transcript provided by NPR, Copyright NPR.