AI Theorist Calls the Turing Test 'Speciesist'

Tuesday, 24 February 2015 - 1:34PM
Technology
Science of Sci-Fi

"Human-like intelligence" has often been used as a benchmark for the advancement of artificial intelligence, even though AI can already do many things that the human brain simply cannot. Now, sociologist and AI theorist Benjamin Bratton calls attention to the speciousness of this assumption in a new op-ed for the New York Times, in which he calls tests such as the Turing Test narcissistic and even "speciesist."


"A mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. That we would wish to define the very existence of A.I. in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism."


He discusses the ways in which popular culture, particularly film, tends to propagate the idea that a computer's "true" intelligence can be measured by its similarity to a human, above all its capacity to feel emotion: "The little boy robot in Steven Spielberg's 2001 film 'A.I. Artificial Intelligence' wants to be a real boy with all his little metal heart, while Skynet in the 'Terminator' movies is obsessed with the genocide of humans. We automatically presume that the Monoliths in Stanley Kubrick and Arthur C. Clarke's 1968 film, '2001: A Space Odyssey,' want to talk to the human protagonist Dave, and not to his spaceship's A.I., HAL 9000."


Essentially, these films and others like them create an implied dichotomy in which AI either wants to become human or wants to eradicate humans. And indeed, several experts, including Stephen Hawking and Elon Musk, have made waves recently by claiming that AI poses a serious existential risk to humanity. Bratton disagrees, arguing instead that, in all likelihood, artificial intelligence wouldn't be particularly bothered with us. If it did pose a threat, that threat would be a byproduct of its superior intelligence and its moral indifference toward us.


"Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all."


He then delves further into the anthropocentric nature of our assumption that "passing" for human is equivalent to intelligence. Computers have been able to beat us at chess for many years, they have greater capacity for memory and, in certain respects, learning, and there is a whole host of feats they can perform that a human brain cannot. And yet, when we measure their "intelligence," we are essentially measuring their similarity to a human. The fallaciousness of this equivalence was demonstrated in 2014, when the chatbot Eugene Goostman "beat" the Turing Test. While some hailed the event as historic, Goostman only passed for human through unsophisticated trickery: the program posed as a 13-year-old Ukrainian boy, lowering judges' expectations of its English and its general knowledge. While humans are obviously intelligent beings in many ways, it's unlikely that we are so intelligent that we should automatically set the curve.


Bratton sees this as an example of deeply normative thinking, and goes so far as to draw a parallel between the Turing Test and Turing's own experience of discrimination: "One notes the sour ironic correspondence between asking an A.I. to 'pass' the test in order to qualify as intelligent - to 'pass' as a human intelligence - with Turing's own need to hide his homosexuality and to 'pass' as a straight man. The demands of both bluffs are unnecessary and profoundly unfair."


"We would do better to presume that in our universe, "thinking" is much more diverse, even alien, than our own particular case. The real philosophical lessons of A.I. will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be (and for that matter, what being human can be)."

