DARPA Seeks to Make AI Robots More Moral and Less Potentially Psychopathic By Teaching Them Manners

Monday, 28 January 2019 - 10:57AM
Military Tech
Robotics

In the original 1979 Alien, the crew of the Nostromo is effectively sabotaged by its science officer, Ash, a Hyperdyne Systems 120-A/2 android programmed to ensure that Weyland-Yutani obtains a living xenomorph specimen, regardless of the cost to human life. To that end, Ash deceives his human colleagues, violates security protocol (disobeying a direct order), and later tries to murder Ellen Ripley, leaving her, when she wakes up 57 years later having dispatched both Ash and the xenomorph, with a profound distrust of androids and AI systems. Unlike 2001: A Space Odyssey's HAL 9000 before him and the various Terminator models from the eponymous film series that followed, Ash is so astoundingly compelling as an antagonist because he's completely psychopathic in the way that only humans can be. HAL and the Terminator are recognizable as machines, but Ash doesn't show his milky innards until he is thoroughly dismantled, and even then he retains his brutal iciness. Because he looks human, we expect him to possess at least some of the "better angels of our nature," but he doesn't.


It's likely that the brainiacs at DARPA have taken note of the critical eye popular culture casts on artificial intelligence. The agency is working on projects intended to make AI robots more human: better able to fit into society. One project in particular stands out.


In a 2017 news release, DARPA disclosed that it had contracted a group of researchers at Tufts and Brown universities to decode how humans learn behavioral norms and act according to environmental and social contexts. The purpose of this project was to develop a method by which AI systems could be taught to assess behavioral norms, thus helping them better assimilate into human society and contexts. "The goal of this research effort was to understand and formalize human normative systems and how they guide human behavior," said DARPA program manager Reza Ghanadan, "so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans."


To address those needs, DARPA reported, "The team was able to create a cognitive-computational model of human norms in a representation that can be coded into machines, and developed a machine-learning algorithm that allows machines to learn norms in unfamiliar situations drawing on human data." In short, these researchers were able to create a basic formula for normative human behavior that works across various environmental and situational contexts. To better understand this, consider the way an algorithm works. Despite the terror the term inspires in social media managers, an algorithm is really just a set of rules or guidelines that describes how to perform a task: think of it as a huge flowchart filled with "if... then..." statements.
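To make that concrete, here is a toy, hand-written rule set in that "if... then..." flowchart spirit. Everything in it (the contexts, the actions, and the evaluate_action helper) is invented for illustration; it is not DARPA's or the researchers' actual representation.

```python
# Illustrative only: a toy, hand-written rule set in the "if... then..."
# flowchart spirit described above. The contexts, actions, and labels are
# invented for this example and are not DARPA's actual representation.

NORMS = {
    "library": {"whisper": "permitted", "talk_loudly": "prohibited"},
    "stadium": {"whisper": "permitted", "talk_loudly": "permitted"},
    "funeral": {"laugh_loudly": "prohibited", "offer_condolences": "prescribed"},
}

def evaluate_action(context, action):
    """Return whether an action is prescribed, permitted, or prohibited
    in a given context; fall back to 'unknown' when no rule applies."""
    return NORMS.get(context, {}).get(action, "unknown")

print(evaluate_action("library", "talk_loudly"))  # prohibited
print(evaluate_action("stadium", "talk_loudly"))  # permitted
print(evaluate_action("funeral", "whisper"))      # unknown
```

The limitation is obvious: someone has to write every rule by hand, which is exactly why the researchers wanted machines to learn norms from human data instead.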


In their published findings, the researchers noted that there are rules people intuit and generally abide by: certain behaviors are prescribed or prohibited according to the situation, e.g., whispering while sitting in a public library, which implies a corresponding prohibition against talking loudly. To figure out how to teach an AI to quickly assess these rules and act on them, the researchers developed expressions – formulas – to define norms, then asked human participants first to generate norms in an array of contexts and then to detect norms across those contexts. Finally, the researchers considered how people learn norms in the first place, taking into account the contextual clues an AI might consider upon entering a new scenario. "Using a data representation format that incorporates several properties of human norm representation and learning," the team writes, "we then developed a novel algorithm for automatically learning context-sensitive norms from the human data." Whether DARPA has successfully implemented this algorithm in its AI programming remains to be seen.
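As a rough sketch of what "learning context-sensitive norms from human data" can mean in practice, the snippet below tallies participant judgments and turns them into probability estimates. It is a minimal illustration of the general idea, not the team's published algorithm; the judgment data and the prob_prohibited helper are hypothetical.

```python
# A minimal sketch of learning context-sensitive norms from human judgments:
# count how often participants mark an action as prohibited in each context
# and convert the counts into smoothed probability estimates. This is an
# illustration of the general idea, not the team's published algorithm;
# all judgments below are invented.

from collections import defaultdict

# (context, action, judgment) triples, as might be collected from participants.
judgments = [
    ("library", "talk_loudly", "prohibited"),
    ("library", "talk_loudly", "prohibited"),
    ("library", "whisper", "permitted"),
    ("stadium", "talk_loudly", "permitted"),
    ("stadium", "talk_loudly", "permitted"),
]

counts = defaultdict(lambda: {"prohibited": 0, "total": 0})
for context, action, judgment in judgments:
    counts[(context, action)]["total"] += 1
    if judgment == "prohibited":
        counts[(context, action)]["prohibited"] += 1

def prob_prohibited(context, action):
    """Estimate P(prohibited | context, action) with add-one smoothing so
    that unseen context-action pairs default to an uncertain 0.5."""
    c = counts[(context, action)]
    return (c["prohibited"] + 1) / (c["total"] + 2)

print(round(prob_prohibited("library", "talk_loudly"), 2))  # 0.75: likely prohibited
print(round(prob_prohibited("stadium", "talk_loudly"), 2))  # 0.25: likely permitted
print(round(prob_prohibited("funeral", "talk_loudly"), 2))  # 0.5: no data, uncertain
```

The hard part the researchers actually tackled is generalizing to situations the system has never seen; a simple frequency table like this one only covers contexts it has already been shown.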


It is worth noting that the team was led by Dr. Bertram Malle of Brown University's Department of Cognitive, Linguistic, and Psychological Sciences. Besides a formidable body of research in cognitive and behavioral science, Dr. Malle's CV notes a particular interest in human-robot interaction, including "people's expectations of future robots; psychological mechanisms triggered by robot appearance (e.g., visual perspective taking); attempts to implement social-cognitive and moral competence in robots; conditions of optimal human-robot interaction (e.g., trust, explainability)."


As anthropomorphic technology becomes increasingly embedded in our lives – consider that we no longer think anything of talking to proto-AIs like Siri and Alexa, and we're quickly getting used to the idea of humanoid robots – the tensions inherent in it grow as well. Forbes reported last year that robots have made humans all but obsolete for manufacturing jobs that once seemed like the last outpost of unskilled labor. Even iconoclasts like Elon Musk betray a certain phobia about AI that borders on superstition: in 2014, the SpaceX and Tesla founder likened AI to "summoning the demon," which seems positively medieval in outlook, especially for a man who sent his car into orbit.


In a 2015 editorial, Dr. Malle discussed how we might develop moral robots and AI systems. In doing so, he illuminated some of the darkest corners of human behavior: the attitudes, born of fear, that time and time again lead to actions that are illogical at best and destructive at worst. The real threat, Malle submits, is that robots will learn these fears from humans.

 

"Perhaps the greatest threat from robots comes from the greatest weakness of humans: hatred and conflict between groups. By and large, humans are cooperative and benevolent toward those whom they consider part of their group, but they can become malevolent and ruthless toward those outside their group. If robots learn such hostile sentiments and discriminatory actions, they may very well become a threat to humanity - or at least a threat to groups that the robot counts as 'outside' its community.  


As recent experiments have shown, that threat is very real indeed. We would do well to take care that we instill these beings with the "better angels of our nature," lest Elon Musk's grim, medieval prophecy prove true. 


Science
Artificial Intelligence
Military Tech
Robotics