AI Expert Claims Chappie's Robot Police Force May Be Close to a Reality
The titular self-aware robot in Neill Blomkamp's Chappie has been getting all of the attention in the media lately, with AI experts attempting to answer the age-old question of whether robots can achieve true consciousness. But for all that Chappie ultimately loses interest in its social commentary, the most impressively grounded aspect of the film's fictional world is the casual manner in which humanity allows robots to take over its police force. So how close are we to forming all-robot police forces in reality? Dr. Wolfgang Fink, an AI expert and associate professor at the University of Arizona, claims that while Chappie himself may be a long way off, the equally logical and terrifying prospect of an automated police force probably isn't.
First, Dr. Fink clarifies that technically, the term "artificial intelligence" has nothing to do with autonomy or self-awareness. Speaking to Blastr, he said, "From a scientific point of view, A.I. happens to be a technical term, and only describes systems that are rule based: if you encounter different situations and react [in different] ways. If you have many of these rules, it looks like the system is intelligent."
Dr. Fink is of the opinion that we are nowhere near self-aware, sentient robots like Chappie himself, but that we are very close to the relatively simple police robots that preceded him, primarily because they're "rules based. They are not self aware, nor are they situationally aware. If [these robots] encounter certain situations, they know how to react to it. They pull a gun, things like that."
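The rule-based systems Dr. Fink describes can be pictured as a fixed lookup from recognized situations to scripted responses. Here is a minimal, purely illustrative sketch (the situation names and responses are invented for this example, not taken from any real policing system):

```python
# Toy illustration of a rule-based system: behavior is nothing more
# than a fixed mapping from anticipated situations to canned responses.
# All situation names and responses here are hypothetical.

RULES = {
    "suspect_fleeing": "pursue",
    "weapon_drawn": "draw_weapon",
    "all_clear": "patrol",
}

def react(situation: str) -> str:
    """Return the scripted response for a recognized situation.

    With enough rules, the system can *look* intelligent, but it has no
    awareness: it only handles situations its programmers anticipated.
    Anything else falls through to a default.
    """
    return RULES.get(situation, "no_rule_defined")

print(react("weapon_drawn"))               # prints "draw_weapon"
print(react("civilians_in_line_of_fire"))  # prints "no_rule_defined"
```

The second call is the crux of Dr. Fink's point: a situation nobody wrote a rule for doesn't produce judgment, it produces whatever the default happens to be.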
At first blush, this seems like a great idea, as a robotic police force would keep flesh-and-blood police officers out of harm's way. Furthermore, robots wouldn't suffer from the same prejudices and biases as humans, so they could arguably enforce the law in a more egalitarian way. In the post-Ferguson landscape, this is no small point in the robots' favor.
However, Dr. Fink clarified that there are many problems inherent to the idea of a robotic police force: "The problem is, with A.I., if you encounter something for which you do not have a rule and did not anticipate, you essentially do not know how to react."
"[I]f there's a shootout and some civilians walk by in the line of fire, a human police officer would alter the course of action and consider the new situation that just arose. But the A.I. system would just follow the protocol and just keep firing. That's a problem, there."
So although there are many potential benefits to automating police officers' jobs, there are also many risks. The answer will likely be some kind of compromise, and hopefully we won't just blithely accept the idea the moment robot manufacturers manage to build machines capable of doing the job.