Cybersecurity Expert Warns That AI Systems Can Become Malevolent

Artificial Intelligence
Monday, 16 November 2015 - 5:36PM

While big names in science and technology like Stephen Hawking and Elon Musk are telling us to head for the hills when it comes to artificial intelligence, many other theorists are claiming that AI is only ever evil in the movies. Cybersecurity expert Roman Yampolskiy of the University of Louisville falls in the former camp, asserting that most AI researchers are not worried enough about systems that could become outright malevolent.

Yampolskiy argues that AI systems are not necessarily neutral but instead "fall in the middle on the spectrum from completely benign to completely evil," clarifying that "evil" in this context can refer to any agenda that is not aligned with human concerns, rather than one that is "explicitly antagonistic."

First, he states that unintentional pathways, such as mistakes in design, could lead to malevolent machines, especially in technology that is built to kill people, such as drones.

"They're dangerous by design; they kill people," said Yampolskiy. "And like all machines, they could be vulnerable to hacking, malfunction, or misuse. It could also be something like a smart computer virus. Or a chatbot (one you can talk to online) hacked by criminals to steal identities. In the military, drones capturing visual data can be tapped into."

And aside from lethal outcomes, there could also be serious consequences for people's finances and health. He claims that malfunctioning AI could potentially take over our resources, seize political control, reveal sensitive information, and invade citizens' privacy, effectively establishing a surveillance state.

"Wall Street trading, nuclear power plants, social security compensations...are only one serious design flaw away from creating disastrous consequences for millions of people."

But while unintentional design flaws could have devastating implications, Yampolskiy thinks we should be even more concerned about intentional malevolence on the part of the programmers:

"We should not discount the dangers of intelligent systems with semantic or logical errors in coding, or goal alignment problems," said Yampolskiy, "but we should be particularly concerned about systems that are unfriendly by design."

In order to prevent this dystopian future, Yampolskiy believes we should establish AI ethics boards and AI safety committees to oversee the creation of AI systems. But most importantly, he says, we shouldn't assume that AI systems that are secure today can't become unsafe at a later point.

"Few people, even in the AI safety research community, consider dangers of AI designed to be malevolent on purpose," said Yampolskiy. "But it is the biggest danger we face."