You hear sirens and see flashing lights behind you. You know what’s to come—a speeding ticket—and instantly regret going over the posted limit. You pull over to the side of the road as the cop car pulls up behind you. But instead of a police officer exiting their car, you soon realize the car is the police officer.
Now primarily the stuff of action movies and sci-fi novels, robotic policing in the U.S. may be the new normal sooner than you think.
Last month news broke that Ford had filed a patent for a self-driving police car: a robot-controlled vehicle that would use artificial intelligence to issue speeding tickets from roadside hiding spots.
According to the patent, the vehicle would be able to detect infractions by other cars on the road, either by itself or working along with surveillance cameras. The police vehicle would then be able to wirelessly connect to the car in question to communicate with the driver and issue a warning or citation.
While we don’t yet know what additional policing powers these robotic police cars will have, it isn’t far-fetched to imagine a future with humanoid robot police officers chasing suspects on the street. In fact, it’s already happening in Dubai. (They have robot cops—they just can’t chase suspects yet.) Many questions remain, from the extent of their mobility—can they climb stairs or cuff a suspect?—to whether they will carry weaponry or have enough artificial intelligence to make judgment calls in a potentially life-threatening situation.
A&E True Crime spoke about the ethics and legality of robotic policing with Ryan Calo, a law professor at the University of Washington and a leading scholar in emerging technology and the law.
Legally speaking, is a driverless police car any different from a red-light camera or a satellite-issued speeding ticket?
Police need to have a policy in advance to justify how they use this, but not because of any intrinsically important legal reasons.
The public has an appetite for this conversation because we react to anything anthropomorphic (having human characteristics) in a visceral way. Drones are so visceral, as opposed to a stationary camera.
Imagine two scenarios: A police [car] accidentally runs into somebody. The next day we would expect that police would still be driving squad cars. Compare that to a situation where there is a robot police officer. Even if that robot police officer is being controlled or monitored…if there was an adverse event with that robot, we would not expect robots to be out in full force the next day.
Given that people might be viscerally opposed to driverless police cars, why bother making them? What’s the benefit to police departments?
For routine patrols, the value could be that officers don’t need to concentrate on driving and can do other work and be more efficient, maybe more vigilant.
There are plenty of times when just the presence of officers is meant to deter illegal conduct. You find out a particular street in San Francisco has sex workers; you’re worried about that, and you have police patrol there more frequently. Imagine a world with a bunch of driverless cars and tinted windows. You never know when there’s an officer in there. It’s a way to deter activity.
What about expanding the use of robotics in the police force, more generally?
This is common sense, but people might be safer if there are more officers on the street, even if they’re robots.
If they can deter rape and burglary, that’s a good thing for society. I don’t want officers to be in harm’s way when they don’t need to be. For a long time, robots have done the dull and dirty jobs people don’t want to do.
In 2016 a robot in Dallas was used to kill an armed suspect. What are the laws around police using robots that can kill?
There’s an international debate in the military context about lethal autonomous weapons, and there’s a general consensus…that there needs to be meaningful human control over weapons. But under U.S. law…generally it would have to be decided at the local level.
If a police robot is responsible for a wrongful death, who gets prosecuted?
Whoever is teleoperating it would clearly be responsible. If someone shoots someone with a gun, you don’t ask, “Who is responsible: the person or the gun?” Clearly the person.
That’s not to say teleoperation doesn’t complicate the analysis: A lot of times, police get a pass for the application of force because they are at risk or the public is at risk. They say, “I feared for my own safety.” And if they’re not there, they don’t fear for their safety. So presumably, if an officer is not there, there should be a much higher bar for the use of force.
Now, what happens if the robot erroneously suggests that force would be appropriate, and says, “Just push here”? Then you could have an argument that it has to do with the design. But at the end of the day, if it’s truly wrongful, it’s the teleoperator.
How does the law deal with a situation where a robot goes rogue and kills someone without a human operator?
If we’re talking about civil liability, it’s probably going to be a product-liability question. Robots kill about one person a year in factory settings. And when they do, it’s usually handled with workers’ compensation. No one says, “This robot did something bad.”
In criminal law, you have to intend what you do. With emergent behavior, sometimes robots—given the complexity of their interactions with the world—do stuff we don’t expect them to do.
Imagine there was a driverless car that was built as a hybrid and was supposed to experiment with fuel efficiency. The car figures out that it has a better day with a full battery, so at night, it runs its engine in the garage to fill the battery. And that kills everyone in the house [with carbon monoxide].
If you say to the manufacturers, “Your car killed a bunch of people,” they’ll say, “We did not foresee this category of harm.” And that’s a plausible argument, but then who is responsible? You have this prospect of victims without perpetrators, and that creates complications in the law.
You’re talking about accidents in design. What about concerns that robots, as their intelligence expands, might revolt against humans and begin actively fighting us? Is that a worry among those in your profession?
There is way too much emphasis on that, given its likelihood in the foreseeable future. I bemoan it when we spend too many of our resources investigating that question.
We ought to be emphasizing safety, fairness and transparency of these systems. Sometimes there’s an inverse relationship between what’s sexy and what’s plausible.