Police Officers Turn to Robocops for Moral Guidance

Friday, 13 June 2014 - 10:40AM
Artificial Intelligence

Law enforcement has enlisted the help of computers to ensure that officers respect suspects' privacy, on the theory that computers are more objective than humans and therefore fairer. But research shows that computers may be just as ethically fickle as humans.


Nine years ago, federal agents attached a GPS tracker to the car of a man they suspected of dealing drugs, without obtaining a warrant. After four weeks of tracking, they had gathered enough evidence to confirm their suspicion. But when the man challenged the surveillance as a violation of his Fourth Amendment right to privacy, the Supreme Court threw out the evidence, with several justices arguing that four weeks was simply too long to surveil a suspect without a warrant. The justices debated how long tracking could continue in a routine investigation before it constituted a violation of privacy, but couldn't arrive at a satisfactory answer. As Justice Scalia wrote in United States v. Jones, "it remains unexplained why a 4-week investigation is 'surely' too long."


The answer may come from technology, according to Steven Bellovin of Columbia University. Computers may make for more ethical cops: machine-learning algorithms can tease nuanced patterns out of tracking data, which could yield a concrete answer to how much surveillance is too much.


"Some justices think four weeks is too much and they've never been able to explain why," said Bellovin. "I saw there was a natural way to answer some of the questions by using these techniques."


Using these techniques, Bellovin and his team analyzed tracking data to determine how much information can be gleaned about an individual over a given stretch of time. Their analysis concluded that one week of composite location data yields enough personal information to constitute a violation of privacy. Essentially, the researchers measured how accurately the data points gathered over a given period could predict a person's future behavior, and they were able to quantify clearly that the longer a person was surveilled, the more accurate the predictions became. Although this seems somewhat intuitive, it is the first concrete evidence that an investigation that lasts four weeks violates privacy more than one that lasts four days.
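To make the approach concrete, here is a minimal sketch of the general idea, not the team's actual code: it simulates a person with a weekly routine, trains a naive frequency-based predictor on progressively longer windows of GPS fixes, and measures how well that predictor guesses future whereabouts. The places, the routine, and the predictor itself are all invented for illustration.

```python
import random
from collections import Counter, defaultdict

random.seed(42)

PLACES = ["home", "work", "gym", "store", "cafe"]

def true_location(day, hour):
    """Ground-truth routine: nights at home, weekday office hours at work,
    and habitual but noisy choices the rest of the time."""
    if hour < 8 or hour >= 22:
        return "home"
    if day % 7 < 5 and 9 <= hour < 17:
        return "work"
    return random.choices(PLACES, weights=[4, 1, 2, 2, 1])[0]

def observe(days):
    """Simulate GPS surveillance: one (weekday, hour) -> place fix per hour."""
    return [((d % 7, h), true_location(d, h))
            for d in range(days) for h in range(24)]

def train(fixes):
    """Predictor: the most frequent place seen in each (weekday, hour) slot."""
    counts = defaultdict(Counter)
    for slot, place in fixes:
        counts[slot][place] += 1
    return {slot: c.most_common(1)[0][0] for slot, c in counts.items()}

def accuracy(model, fixes):
    """Fraction of held-out fixes the model predicts correctly."""
    return sum(model.get(slot) == place for slot, place in fixes) / len(fixes)

future = observe(28)  # a held-out "future" month the tracker never saw
for days in (1, 4, 7, 14, 28):
    model = train(observe(days))
    print(f"{days:2d} days of tracking -> {accuracy(model, future):.0%} "
          "accuracy at predicting future whereabouts")
```

Accuracy climbs as the tracking window grows, mirroring the study's core finding in miniature: more days of data mean sharper predictions about where the subject will be next.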


But Harry Surden at the University of Colorado warns that the machines' supposed "objectivity" could be an illusion: two different algorithms can produce different analyses of the same data. (Although not perfectly analogous, this reminds me of the definition of artificial intelligence in "Terminator: The Sarah Connor Chronicles," which posited that a machine was sentient as soon as it started yielding different answers from the same data.) Because the algorithms are created by people, elements of subjectivity can creep in, particularly in deciding which factors an algorithm will focus on and tout as "important." "We have to be very cautious to not overly endow things that look technological and mathematical as objective," Surden said.


The Turk, the earliest version of Skynet and the first machine to achieve sentience in "The Sarah Connor Chronicles":


Credit: Fox
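Surden's point is easy to demonstrate. In the hypothetical sketch below (the data and both models are invented, not drawn from his work), two simple, fully deterministic algorithms ingest the identical location trace. One assumes time of day is the feature that matters; the other assumes the sequence of places does. Asked the same question, they give different answers, because a human decided which patterns each one would treat as important.

```python
from collections import Counter, defaultdict

# One work week of (day, hour, place) fixes: invented data for illustration.
trace = [("mon", 9, "work"), ("mon", 12, "cafe"), ("mon", 13, "work"),
         ("tue", 9, "work"), ("tue", 12, "gym"),  ("tue", 13, "work"),
         ("wed", 9, "work"), ("wed", 12, "gym"),  ("wed", 13, "work"),
         ("thu", 9, "work"), ("thu", 12, "cafe"), ("thu", 13, "work"),
         ("fri", 9, "work"), ("fri", 12, "cafe"), ("fri", 13, "work")]

# Algorithm A decides time of day matters: the most common place per hour.
by_hour = defaultdict(Counter)
for _, hour, place in trace:
    by_hour[hour][place] += 1
model_a = {h: c.most_common(1)[0][0] for h, c in by_hour.items()}

# Algorithm B decides sequence matters: the most common next place given
# the previous one (a first-order Markov chain over consecutive fixes).
transitions = defaultdict(Counter)
for (_, _, prev), (_, _, nxt) in zip(trace, trace[1:]):
    transitions[prev][nxt] += 1
model_b = {p: c.most_common(1)[0][0] for p, c in transitions.items()}

# Same data, same question: where is the subject at noon, last seen at work?
print("Algorithm A says:", model_a[12])      # cafe
print("Algorithm B says:", model_b["work"])  # work (most transitions stay put)
```

Neither model is wrong by its own lights; they simply encode different human judgments about which features of the data count.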


There are several other obstacles to hailing this algorithm as a perfectly objective and fair way to determine "how much is too much." Although computers use the algorithms to unearth the patterns, the patterns must still be interpreted by a human, since the entire concept of privacy (not to mention its weight when balanced against protection) is subjective. And while this study does seem to prove that, by certain criteria, a longer investigation constitutes "more" of a violation of privacy, drawing a definitive line between an acceptable violation and "too much" remains at least somewhat arbitrary. Still, the study poses necessary questions that should be part of the larger societal conversation about privacy in the electronic age; as Bellovin et al. put it, when deciding the limits on surveillance technology, we "should take into account not only the data being collected but also the foreseeable improvements in the machine learning technology that will ultimately be brought to bear on it."
