18 Jul Artificial Intelligence
Brain-computer interfaces already exist and are used in medicine, particularly for people whose bodies have difficulty moving. At the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL), researchers are working on the next step: a robot controlled by thought and even by gestures.
The researchers' guinea pig, if we can call it that, is Baxter, a humanoid robot designed by Rethink Robotics and connected to a human through a series of electrodes placed on the person's head and arms. The testing phase allows the Boston researchers to correct execution errors through brainwaves and hand gestures, making it easier to manage robots and their activities. According to the results of the first tests, the robot was able to improve its accuracy from 70% to 97%.

The goal is to make the machine adapt to the way human beings communicate, not vice versa. The heart of the invention lies in the ability to detect brain signals known as ErrP (Error-related Potential), which occur when a human becomes aware of an error. The system involves monitoring a person whose job is to supervise the work of a robot. When the person realizes that the robot is making a mistake, the ErrP signal produced by their brain interrupts the robot's action. The human then produces a hand gesture that "tells" the robot what the correct operation is. This step exploits muscle activity monitoring.
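The supervision loop described above can be sketched in code. The following is a minimal, illustrative Python sketch, not the MIT system's actual implementation: the function names, the threshold, and the simple averaging "classifiers" are all hypothetical stand-ins for trained EEG/EMG models.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical confidence cutoff for the stand-in ErrP detector
ERRP_THRESHOLD = 0.8

@dataclass
class Action:
    target: str  # which target the robot is currently reaching for

def detect_errp(eeg_window: List[float]) -> bool:
    """Stand-in ErrP detector: flags a window whose mean amplitude exceeds
    a threshold. A real system would run a trained EEG classifier."""
    return sum(eeg_window) / len(eeg_window) > ERRP_THRESHOLD

def classify_gesture(emg_window: List[float]) -> str:
    """Stand-in EMG gesture classifier: maps net muscle activity to a
    left/right correction."""
    return "left" if sum(emg_window) < 0 else "right"

def supervise(action: Action, eeg: List[float], emg: List[float]) -> Action:
    """One supervision step: if the human's EEG shows an error-related
    potential, halt the current action and substitute the target
    indicated by the hand gesture; otherwise let the robot continue."""
    if detect_errp(eeg):
        corrected = classify_gesture(emg)
        return Action(target=corrected)  # switch to the corrected target
    return action  # no ErrP detected: keep the current action

# The robot starts reaching for the wrong target; the supervisor's ErrP
# plus a rightward hand gesture redirect it.
result = supervise(Action("left"), eeg=[0.9, 1.0, 0.95], emg=[0.2, 0.4])
print(result.target)  # → right
```

The key design point mirrored from the article: the ErrP signal only interrupts the action, while the gesture supplies the correction, so the two channels play complementary roles.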
“Communicating with a robot becomes similar to communicating with another person,” said Joseph DelPreto, head of the research project. In the long term, this approach could also simplify the lives of people with movement or communication problems. For Daniela Rus, also a member of the development team, “solutions like this show that it is possible to develop robotic systems that behave like a more natural and intuitive extension of ourselves.”