Cybersecurity Expert Warns That AI Systems Can Become Malevolent
While big names in science and technology like Stephen Hawking and Elon Musk are telling us to head for the hills when it comes to artificial intelligence, many other theorists claim that AI is only ever evil in the movies. Cybersecurity expert Roman Yampolskiy of the University of Louisville falls in the former camp, asserting that most AI researchers are not worried enough about AI systems that could become outright malevolent.
Yampolskiy thinks that AI systems are generally not neutral but "fall in the middle on the spectrum from completely benign to completely evil," clarifying that "evil" in this context can refer to any agenda that is not aligned with human concerns, rather than one that is "explicitly antagonistic."
First, he states that unintentional pathways, such as mistakes in design, could lead to malevolent robots, especially in technology that is intended to kill, such as military drones.
And aside from lethal outcomes, there could also be serious consequences for people's finances and health. He claims that malfunctioning AI could potentially start taking over our resources, seizing political control, revealing sensitive information, and invading citizens' privacy, effectively establishing a surveillance state.
But while unintentional design flaws could have devastating implications, Yampolskiy thinks we should be even more concerned about intentional malevolence on the part of the programmers themselves.
In order to prevent this dystopian future, Yampolskiy believes we should institute AI ethics boards as well as AI safety committees to oversee the creation of AI. But most importantly, he says, we shouldn't take for granted that AI systems deemed secure today can't become unsafe at a later point.