Microsoft Research Chief Claims AI Will Not End the Human Race
Countless films, including the upcoming Ex Machina, Avengers: Age of Ultron, and Terminator: Genisys, to name a few, have explored the idea of a malevolent artificial intelligence that turns against its creators once it achieves consciousness. But could this happen in real life? Some experts have theorized that artificial intelligence will, in fact, pose an existential threat in the near future, but Microsoft Research chief Eric Horvitz has added a dissenting voice to the debate, claiming that while AI may achieve human-like consciousness, it will not attempt to exterminate the human race.
Horvitz acknowledged that artificial intelligence is growing exponentially "smarter," and that self-awareness may be on the horizon: "The notion that systems that can think, listen, hear, collect data from thousands of user experiences - and we synthesize it back to enhance its services over time - has come to the forefront now."
"The next if not last enduring competitive battlefield among major IT companies will be artificial intelligence," said Horvitz. "We have Cortana and Siri and Google Now setting up a competitive tournament for where's the best intelligent assistant going to come from... and that kind of competition is going to heat up the research and investment, and bring it more into the spotlight."
But unlike many other scientists, he does not believe that there is a significant danger that we will lose control over artificial intelligence systems, or that they will plot to murder us. Rather, he thinks that there are so many experts researching and monitoring these systems that it is more than likely they will only benefit the human race.
"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences," he said. "I fundamentally don't think that's going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."
This viewpoint is in direct opposition to renowned physicist Stephen Hawking, who recently claimed that artificial intelligence may spell the end of the human race, as well as Elon Musk, who likened artificial intelligence research to "summoning the demon." Horvitz acknowledges that there may be dangers associated with AI, but he believes that they are more likely to be related to privacy concerns than existential ones. He also suggests that artificial intelligence could itself help address privacy issues:
"We've been working with systems that can figure out exactly what information they would best need to provide the best service for a population of users, and at the same time then limit the [privacy] incursion on any particular user. You might be told, for example, in using this service you have a one in 10,000 chance of having a query ever looked at... each person only has to worry about as much as they worry about being hit by a bolt of lightning, it's so rare. So, I believe that machine learning, reasoning and AI more generally will be central in providing great tools for ensuring the privacy of folks at the same time as allowing services to acquire data anonymously or with only low probabilities of risk to any particular person."
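The mechanism Horvitz describes is essentially random sampling: each query is inspected with a tiny fixed probability, so any individual user's exposure is bounded. A minimal sketch of that idea in Python follows; the function names and the exact scheme are illustrative assumptions, not a description of any actual Microsoft system.

```python
import random

# The "one in 10,000 chance of having a query ever looked at" from the quote.
# Hypothetical constant; the real probability and mechanism are not public.
REVIEW_PROBABILITY = 1 / 10_000

def maybe_flag_for_review(query: str, rng: random.Random) -> bool:
    """Sample this query for human review with a small fixed probability."""
    return rng.random() < REVIEW_PROBABILITY

def process_queries(queries, seed=0):
    """Return the subset of queries that were sampled for review."""
    rng = random.Random(seed)
    return [q for q in queries if maybe_flag_for_review(q, rng)]

if __name__ == "__main__":
    # Over a million queries we expect roughly 100 to be flagged,
    # while any single query's chance of inspection stays at 1/10,000.
    queries = [f"query-{i}" for i in range(1_000_000)]
    print(len(process_queries(queries)))
```

The point of the design is that per-user risk does not grow with the size of the service: the inspection probability is fixed per query, regardless of how many users the system serves.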
I'm not qualified to say whether Horvitz is correct about AI's existential threat or lack thereof, but his conception of AI's "solution" to privacy concerns seems strange. It doesn't sound like solving privacy issues so much as, at best, limiting them, and at worst falsely convincing users that they don't exist. Artificial intelligence may be able to make people feel better about their chances of having their privacy violated, but that doesn't mean it can take privacy concerns away; in fact, that complacency might make the violations worse.