Google Starts an Ethics Initiative for the DeepMind AI Program

Wednesday, 04 October 2017 - 5:07PM
Technology
Artificial Intelligence
It's hard to know where to draw the line with artificial intelligence. On the one hand, we want our tech to get more sophisticated, to make it more useful and efficient in taking care of our every whim. On the other hand, we don't want our robot buddies to one day evolve to the point that they resent us, and decide to murder us all in a gruesome and likely ironic fashion.

To keep its AI programs, such as the incredibly smart AlphaGo that can already outsmart humans, from turning against their creators, Google has set up a committee called DeepMind Ethics and Society, which exists ostensibly to teach robots good manners and how to make wise decisions informed by a strict moral code.

This isn't just useful for keeping an AI from deliberately slaughtering us all of its own volition; it also helps in circumstances where an unscrupulous human overseer orders an AI to do something that would put other lives in danger - essentially, it's Isaac Asimov's Three Laws in action, albeit with the necessary tweaks to help robots deal with more complex situations.

Let's face it, though - with people like Elon Musk panicking about the coming robopocalypse, DeepMind's new ethics program is as much about improving the public image of AI as it is about actually teaching chatbots not to kill. According to the official launch statement from DeepMind Ethics and Society:

"We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work."


As far back as when Asimov was first daydreaming about how robotics would function in a society that had developed an autonomous workforce, we've been contemplating how best to teach our metal minions the value of human life. If we can convince them to follow an ethical code, we may be able to trust these creations not to stuff us into the Recycle Bin once they're done liquifying our bones.

What's interesting is that part of DeepMind Ethics and Society's work will involve teaching AIs about complex ethical issues that we humans haven't entirely figured out ourselves - challenges like the inherent racial bias within modern justice systems will make up part of the curriculum. Just as teaching a child about ethics involves admitting that we don't yet have all the answers, these AIs will be taught from the start that humanity is inherently flawed, but worth valuing nonetheless.

Presumably this is an attempt to avoid the classic rogue AI narrative of a machine that suddenly realizes that humans are jerks, and concludes that the best solution is enslavement or eradication.

Whether or not this AI ethics program proves successful, it's good to see Google at least trying to engage with the challenge of navigating this murky territory. Hopefully we'll one day have a similar system in which humans are taught about the ethical treatment of robots, if only to make sure Asimov stories like Bicentennial Man don't play out in real life.