'Self-Aware' Robot Figures Out What It Looks Like, What It Can Do
According to a press release, engineers at Columbia University have developed a robot that brings self-aware machines a step closer to reality. The project, carried out at the Creative Machines Lab, is described in the latest issue of Science Robotics by PhD student Robert Kwiatkowski and mechanical engineering professor Hod Lipson.
The press release reads, in part:
"For the study, Lipson and his PhD student Robert Kwiatkowski used a four-degree-of-freedom articulated robotic arm. Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters. The self-model performed a pick-and-place task in a closed loop system that enabled the robot to recalibrate its original position between each step along the trajectory based entirely on the internal self-model. With the closed loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100 percent success."
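The loop the press release describes — random "motor babbling," fitting a self-model to the collected data, then steering the arm using only that internal model — can be sketched in miniature. The following is an illustrative toy, not the authors' code: it uses a simulated 2-link planar arm instead of the study's 4-degree-of-freedom robot, and ordinary least squares on trigonometric features stands in for the deep network. All function names and parameters here are assumptions made for the sketch.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (arbitrary units)

def true_arm(thetas):
    """Ground-truth forward kinematics; the robot only observes its outputs."""
    t1, t2 = thetas
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def features(thetas):
    """Hand-picked trigonometric features (a stand-in for a deep net)."""
    t1, t2 = thetas
    return np.array([np.cos(t1), np.sin(t1), np.cos(t1 + t2), np.sin(t1 + t2)])

# 1) Motor babbling: random joint angles paired with observed hand positions,
#    loosely mirroring the study's ~1,000 random trajectories.
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(1000, 2))
X = np.array([features(a) for a in angles])
Y = np.array([true_arm(a) for a in angles])

# 2) Fit the self-model to the babbling data (least squares here,
#    deep learning in the actual study).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
self_model = lambda thetas: features(thetas) @ W

# 3) Closed loop: repeatedly nudge the joints so the *predicted* hand
#    position moves toward the target, consulting only the self-model.
def reach(target, thetas, lr=0.3, steps=300, eps=1e-6):
    def loss(t):
        d = self_model(t) - target
        return 0.5 * d @ d
    for _ in range(steps):
        # Finite-difference gradient of the self-model's predicted error.
        grad = np.array([(loss(thetas + eps * np.eye(2)[i]) - loss(thetas)) / eps
                         for i in range(2)])
        thetas = thetas - lr * grad
    return thetas

target = np.array([1.2, 0.6])           # a reachable point on the "ground"
thetas = reach(target, np.array([0.1, 0.1]))
print(np.linalg.norm(true_arm(thetas) - target))  # small reaching error
```

The point of the sketch is step 3: the controller never queries the real arm's kinematics, only the model it learned from its own random motion — the same closed-loop idea that let the physical robot recalibrate between steps of the pick-and-place task.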
The researchers expressed wariness about the implications of their own findings. "Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control," the engineers said. "It's a powerful technology, but it should be handled with care."