Ex Machina, The Turing Test, and Sci-Fi's Fascination with Emotional Robots

Tuesday, 19 May 2015 - 10:58AM
Science of Sci-Fi

Ex Machina is the most recent in a long string of sci-fi films to ask the question: what does "human-like" artificial intelligence really mean? In the film, a young man, Caleb, is brought to a secluded facility to act as the human component of a Turing Test. Or so he's told; that is definitely not what happens. Instead, he participates in a variation of Blade Runner's Voight-Kampff test, which focuses not on "intelligence" in the strictest terms, but on whether an AI is capable of feeling human emotion.

The Turing Test is a test for artificial intelligence based on a computer's ability to convince a human, through a process of interrogation, that it is not a machine. If the human is fooled, the artificial intelligence passes the test and appears to possess consciousness. However, Caleb points out that the test in Ex Machina is flawed as a Turing Test, since he is aware from the outset that he is questioning a machine, Ava. Nathan, Ava's creator, says that this is the greater test: for the machine to convince Caleb of its consciousness when he already knows it is artificial. "Greater" or not, it is by definition not a Turing Test, despite being characterized as one throughout the film.
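The mechanics of the test are simple enough to sketch in code. What follows is a bare-bones, purely illustrative Python harness for the classic setup (the canned-reply "machine" is a stand-in, not a real AI): a judge converses blindly with a hidden respondent, then renders a verdict, and the machine passes only if it is mistaken for a human.

```python
import random

# Canned replies standing in for the AI under test; a real test would
# connect this to an actual conversational program.
CANNED_REPLIES = [
    "That's an interesting question. What makes you ask?",
    "I'd rather hear your opinion on that first.",
    "Honestly, I'm not sure. Could you rephrase?",
]

def machine_respondent(message: str) -> str:
    """A trivially simple chatbot playing the machine's role."""
    return random.choice(CANNED_REPLIES)

def human_respondent(message: str) -> str:
    """A human confederate typing replies (here, via stdin)."""
    return input(f"(hidden human) reply to {message!r}: ")

def imitation_game(rounds: int = 3) -> None:
    """One sitting of the test: the judge questions a hidden respondent,
    then guesses its nature. The machine passes only if it was the
    hidden respondent and the judge calls it human."""
    hidden = random.choice([machine_respondent, human_respondent])
    for _ in range(rounds):
        question = input("judge> ")
        print("respondent>", hidden(question))
    verdict = input("Human or machine? ").strip().lower()
    if hidden is machine_respondent:
        print("Machine passes." if verdict == "human" else "Machine fails.")
    else:
        print("The respondent was human; no result for the machine.")

if __name__ == "__main__":
    imitation_game()
```

Even this skeleton makes the critics' point visible: nothing in the protocol measures consciousness directly, and the machine's only job is to avoid detection.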

So how is Ava actually being tested?

Nathan is not wrong to challenge the Turing Test. It has been critiqued since its inception, and those criticisms intensified last year when a relatively primitive chatbot, Eugene Goostman, appeared to pass it. Many researchers claim that the Turing Test relies on deception and trickery rather than any meaningful signifier of consciousness. As a result, several alternatives have been proposed, such as the Lovelace Test, which judges an AI on its ability to create, to form an original idea. As another example, Facebook is currently developing a test based on a computer's ability to reason on the same level as a human; a toy sketch of the kind of question such a test poses follows below. And the film incorporates these ideas into its depiction of Ava, as she creates her own piece of art and contributes to Caleb's philosophical discussions about consciousness. She appears to pass these tests. But all of these tests examine only one aspect of awareness: intelligence.
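To make the contrast with the Turing Test concrete, here is a toy sketch (my own illustration, not Facebook's actual benchmark) of the shape of a reasoning-based question: the system reads a short story and must answer a question that requires chaining two stated facts together, with no opportunity to win by deflection or charm.

```python
# A hand-coded toy "reasoner" showing the shape of a reasoning test:
# track who picked up what and where they went, then combine the two
# facts to answer the question. (Illustrative only.)

STORY = [
    "Mary picked up the apple.",
    "Mary went to the kitchen.",
]
QUESTION = "Where is the apple?"

def answer(story: list[str], question: str) -> str:
    holder_of = {}    # object -> person carrying it
    location_of = {}  # person -> last known location
    for line in story:
        words = line.rstrip(".").split()
        if "picked" in words:
            holder_of[words[-1]] = words[0]
        elif "went" in words:
            location_of[words[0]] = words[-1]
    obj = question.rstrip("?").split()[-1]
    return location_of.get(holder_of.get(obj, ""), "unknown")

# The apple moved with Mary, so the correct answer is "kitchen".
assert answer(STORY, QUESTION) == "kitchen"
```

A machine scores well here only by getting the inference right, and that is still a measure of intellect, not of feeling.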

Nathan asks Caleb how he feels about Ava, then asks, "Now the question is, how does she feel about you?" Nathan is not judging her based on her intelligence, which is not in dispute, but on her emotions. 

Though Nathan insists on calling his test a Turing Test, it seems to have more in common with Philip K. Dick's Voight-Kampff test, made famous by the novel Do Androids Dream of Electric Sheep? and its film adaptation, Blade Runner. The Voight-Kampff test does not question an AI's intelligence, but its ability to feel empathy. Though the novel and film differ in their views of the empathetic capacity of AIs, both agree that empathy, not intelligence, is the clear distinction between human and machine.

Why is this distinction so important? Why is science fiction, unlike science, concerned more with the machine's "feelings" than with its "mind"?

Perhaps the answer can best be seen through the conclusion of Ex Machina.

Spoiler alert!

Whether the Turing Test is an examination of intelligence or, as its critics would argue, an exercise in trickery, Ava passes with flying colors at the end of the film. She outsmarts Nathan and manipulates Caleb into believing that she has human emotions. But while we may root for her to overpower her sadistic creator, to most empathetic humans her treatment of Caleb is upsetting and unsettling. The first thought that came to my mind as I watched her leave the facility was that her actions were "inhuman." As she stands at the busy street corner where she had planned a date with Caleb, she performs the actions of a romantic, emotional human, but lacks a fundamental prerequisite for empathy: Caleb, someone with whom to empathize.

Outside of fiction, tests of artificial intelligence may ask only whether a machine can present, at the very least, a facsimile of human intelligence. But in the world of science fiction, AIs are tested on their humanity.

And this is the test that Ava fails.

But Ava is not the only one in the story who would fail a test of humanity. Nathan is, of course, a human, but he is clearly another "inhuman" character: manipulative, sadistic, and (if the AIs are seen as truly conscious beings) murderous. He sees himself as a god, the creator of life, much like Dr. Frankenstein, a character whose story clearly shadows his own (Nathan's search engine, BlueBook, takes its name from Wittgenstein's Blue Book, not from Shelley's novel). Frankenstein, too, creates a murderous monster, and the monster tells him, "You accuse me of murder; and yet you would, with a satisfied conscience, destroy your own creature. Oh, praise the eternal justice of man!" Frankenstein creates a monster with the same intelligence, feelings, and moral failings as himself.

Perhaps all of these tests for artificial intelligence vary not because each is a more perfect means of proving consciousness, but because people create AIs with their own version of "humanity" as the bar to be met. AIs are a reflection not of humanity in general, but of their creator's humanity in particular. An intelligent person will strive for an intelligent AI. An empathetic author will judge his AIs on their empathy.

And a monster will create a monster.  
