Robots and Humans Try to Read Each Other's Minds
The relationship between humans and technology took a strange turn today with the release of two studies: one detailed the use of advanced technology to read a human's mind, and the other detailed humans trying to read a robot's mind.
First, researchers from UC Berkeley used a "brain decoder" that could guess the words a person was thinking based solely on their brainwaves. Similar to the study that allowed pilots to fly planes using brain commands, or the study that allowed for direct brain-to-brain communication, or the study that allowed a paralyzed man to move his hand using only his mind, the researchers used electrodes attached to the brain to record neural activity, and found that specific patterns of activity were associated with hearing certain frequencies. They applied this same concept to thinking certain words, creating personalized decoders for each subject based on their brain activity when hearing different words. These decoders were then able to produce spectrograms, or visual representations of the subject's thoughts, which allowed the researchers to reconstruct the words that the subjects were saying silently to themselves.
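The basic pipeline described above — record neural features, fit a per-subject mapping back to spectrograms, then match the reconstruction against known words — can be sketched in miniature. This is a toy illustration with synthetic data and a simple linear decoder plus nearest-neighbor matching; it is not the study's actual method, and every name and number in it is made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: each "word" has a spectrogram template
# (flattened freq x time), and hearing it evokes neural features
# that are a noisy linear function of that template.
N_WORDS, FREQ_BINS, FRAMES, N_ELECTRODES = 5, 8, 10, 16
templates = rng.normal(size=(N_WORDS, FREQ_BINS * FRAMES))
mixing = rng.normal(size=(FREQ_BINS * FRAMES, N_ELECTRODES))

def record_neural(word_idx):
    """Simulate electrode features evoked by one word."""
    return templates[word_idx] @ mixing + 0.1 * rng.normal(size=N_ELECTRODES)

# "Training": fit a per-subject linear decoder from neural features
# back to spectrograms, using repeated presentations of each word.
X = np.vstack([record_neural(w) for w in range(N_WORDS) for _ in range(20)])
Y = np.vstack([templates[w] for w in range(N_WORDS) for _ in range(20)])
decoder, *_ = np.linalg.lstsq(X, Y, rcond=None)

def decode_word(neural_features):
    """Reconstruct a spectrogram, then pick the closest word template."""
    recon = neural_features @ decoder
    dists = np.linalg.norm(templates - recon, axis=1)
    return int(np.argmin(dists))

# Imagined speech is far noisier in reality, but the matching
# step — compare the reconstruction to each word — is the same idea.
print(decode_word(record_neural(3)))  # recovers word 3
```

The hard part in the real study is that imagined speech evokes much weaker, messier signals than heard speech, which is why the decoders worked at better-than-chance levels for so few subjects.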
The study isn't perfect, by any means. First, the researchers tested only seven subjects, all of whom already had electrodes attached to their brains for epilepsy treatment, so the sample was neither large enough to support firm conclusions nor statistically random by any stretch of the imagination. The results were also modest: the decoders predicted imagined words at better-than-chance levels in only one of the subjects. But the researchers acknowledge the study's shortcomings, and the technology is in its infancy, so it may yet have a wide variety of applications, such as allowing sufferers of full-body paralysis to communicate.
On the flip side, MIT published a study in which researchers used a virtual reality simulation in order to read the minds of robots. Whenever their robot is making a decision, such as which path to take around an obstacle, the program projects its "thoughts" in the form of colored lights. A small dot follows the obstacle, representing the robot's estimate of the obstacle's position in the space, and lines extending across the room in different directions represent its perception of possible paths to take.
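The key idea — having the planner expose its internal state (position estimates and every candidate path with a score, not just the chosen action) so a projector can render it — can be sketched roughly like this. All the names and the clearance heuristic here are illustrative assumptions, not MIT's actual system:

```python
import math
from dataclasses import dataclass, field

@dataclass
class PathOption:
    heading_deg: float   # direction of this candidate path
    clearance: float     # estimated distance this path keeps from the obstacle
    chosen: bool = False # highlighted by the visualizer

@dataclass
class RobotThoughts:
    """Everything a projector would need to render the robot's 'mind'."""
    est_position: tuple                          # the dot on the floor
    options: list = field(default_factory=list)  # the lines across the room

def plan(est_position, obstacle, headings):
    """Score each candidate heading, and publish ALL of them, not just the winner."""
    thoughts = RobotThoughts(est_position)
    for h in headings:
        rad = math.radians(h)
        # Toy clearance model: distance from one step along this heading
        # to the obstacle's estimated position.
        step = (est_position[0] + math.cos(rad), est_position[1] + math.sin(rad))
        thoughts.options.append(PathOption(h, math.dist(step, obstacle)))
    best = max(thoughts.options, key=lambda o: o.clearance)
    best.chosen = True
    return thoughts

thoughts = plan(est_position=(0.0, 0.0), obstacle=(1.0, 0.0),
                headings=[0, 90, 180, 270])
for o in thoughts.options:
    print(o.heading_deg, round(o.clearance, 2), o.chosen)
```

The design point is that a conventional planner would return only the chosen heading; publishing the full `RobotThoughts` structure is what lets the researchers see the rejected paths too, and therefore spot where the algorithm's reasoning goes wrong.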
"Normally, a robot may make some decision, but you can't quite tell what's going on in its mind-why it's choosing a particular path," said Ali-akbar Agha-mohammadi, a postdoctoral researcher in MIT's Aerospace Controls Lab. The program allows the research team to visualize the robot's "perceptions and understanding of the world."
This program let the researchers see the robot's inner workings, making it easier to spot problems in the algorithm and fix them quickly. But the most fascinating part of the study is that they didn't fully understand the robot's decision-making process in the first place. Weren't they the ones who programmed it? The study suggests either that our understanding of our own robotics is more rudimentary than one would think, or that robots are much closer to independent thought and sentience than previously believed. It's probably the former, but we prefer to think the latter.