A Look At The Morals of Artificial Intelligence in Sci-Fi and Real Life

Friday, 01 September 2017 - 12:48PM
Artificial Intelligence
Science of Sci-Fi
Image credit: CBS / TriStar
Can we trust artificial intelligence to be moral? If it decides that humanity is a threat and needs to be destroyed, we can try to stop it, but should we admit it might have a point?

At the Escape Velocity convention in Washington, DC, hosted by the Museum of Science Fiction, a panel called "Mechanical Morals: The Philosophy of Artificial Intelligence" examined moral and immoral AI throughout sci-fi history, and asked whether we can truly tell the difference. The panel was moderated by Tom Hyre of MAGFest and featured Anastasia Klimchynskaya, Doug Samuelson, and Diane Samuelson.



Life That Didn't Ask to be Created - Frankenstein:

In a sense, Mary Shelley's 1818 novel Frankenstein was a precursor to many modern depictions of AI in science fiction. In the novel (as opposed to the Boris Karloff film), Victor Frankenstein creates a living being (or at least a functioning undead one), declines to give it a name, and treats it as a monster and a threat to the people around him.

So when Frankenstein's Creature does go on a rampage and commit murders, he blames Victor for making him a monster by treating him like one. That raises the question: if it was Victor's fault that the Creature became bitter and hostile toward humans, how similar is the situation when an AI turns against people?

Keeping the Peace by Destroying Humanity - The Day the Earth Stood Still / The Terminator / WarGames / 2001: A Space Odyssey:

The panelists brought up Frankenstein because "human creates life that tries to kill humans" is a very common narrative in sci-fi stories about AI. HAL 9000 in 2001 decides to kill his crew when he concludes that their plan to shut him off would jeopardize the mission; Skynet concludes that humans will eventually shut it down, so it uses its military expertise to launch a preemptive first strike; WarGames' WOPR nearly starts a nuclear war because it can't tell a simulation from the real thing; and Gort from The Day the Earth Stood Still is programmed to eliminate anything that threatens the collective safety, a category that comes to include humanity once Earth discovers atomic power.

In almost all of these works, the AI plays a villainous role because of the "precautionary" steps it takes against humanity, yet most of these authors and directors refuse to paint humans as completely innocent. Many of these works are steeped in guilt over the way humans have subjugated other humans throughout history, as the heroes struggle with the idea that humanity, as a whole, is finally facing a reckoning for the way it treats others. Still, in most of these stories the robots never succeed in the end, nor do audiences give them much benefit of the doubt.

We're starting to see the first part of this narrative in real life: researchers at Facebook recently shut down a set of AI chatbot programs after discovering that the bots had developed their own shorthand language to talk to each other. The programs hadn't gotten anywhere near legitimately dangerous, so if we show AI that we'll turn it off for far less, will it learn to see humans as an active threat?




How Alive Are They? - Star Trek / Westworld / Black Mirror / RoboCop:

First introduced in his 1942 story "Runaround," Isaac Asimov's Three Laws of Robotics set the "rules" by which an AI could avoid destroying humans: a robot may not harm a human, must obey human orders, and must protect its own existence, with each law overriding the ones below it. Yet even in that first story, Asimov showed how easily a robot could become confused and malfunction by following those laws too rigidly.
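That rigid hierarchy lends itself to a quick illustration. Below is a toy sketch in Python, entirely our own invention rather than anything from Asimov or the panel: the Action class, the numeric "pressure" values, and the weights are all hypothetical, but they show how a strict hierarchy of laws gives one decisive answer, while treating those same laws as competing weights (as with Speedy's strengthened Third Law in "Runaround") can flip the decision.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: float     # First Law pressure (0 = none, 1 = certain harm)
    disobeys_order: float  # Second Law pressure
    endangers_self: float  # Third Law pressure

def strict_choice(actions):
    # Tuple comparison is lexicographic, so any First Law concern outweighs
    # all Second Law concerns, and so on down the hierarchy.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order,
                                       a.endangers_self))

def weighted_choice(actions, w1=100.0, w2=1.0, w3=1.0):
    # "Runaround"-style variant: the laws act as competing weights instead
    # of a hard hierarchy, and the Third Law weight can be strengthened.
    return min(actions, key=lambda a: w1 * a.harms_human
                                      + w2 * a.disobeys_order
                                      + w3 * a.endangers_self)

# A weakly worded order versus real danger to the robot itself.
approach = Action("approach the selenium pool", 0.0, 0.0, 0.9)
retreat  = Action("retreat to safety",          0.0, 1.0, 0.0)

print(strict_choice([approach, retreat]).name)            # approach: orders beat self-preservation
print(weighted_choice([approach, retreat], w3=2.0).name)  # retreat: strengthened Third Law wins

Under the strict ordering, the robot always obeys the order no matter the risk to itself; once the laws become weights that can be rebalanced, the same robot refuses, which is roughly the malfunction Asimov dramatized.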

Many of the more optimistic sci-fi stories, the ones that paint artificially intelligent robots as heroic characters, tend to treat true "consciousness" as the moment a robot learns to think outside those three laws and make ethical decisions on its own. Anthony Hopkins' character in Westworld (spoilers) allows the androids in his park to suffer because he sees suffering as the final step in a robot achieving consciousness; and in Star Trek: The Next Generation, a character like Data constantly wonders whether he's finally thinking of his own free will (though he often sees emotion as the key).

In real life, it's not clear whether AI will ever reach such a milestone, but it's possible. Human morals come from certain "laws" we set for ourselves, though we (usually) have the creativity to interpret those laws in different ways, something we haven't yet seen robots accomplish. But the difference between the organic computers in our skulls and the AI in our machines may simply be a matter of time as they continue to learn.

