
Enhanced Performance of Non-Invasive Brain-Computer Interfaces with AI
Participants controlled a robotic arm using only their thoughts, thanks to AI and an electrode-studded cap.
Companies like Neuralink and Precision Neuroscience are developing cutting-edge brain implants, primarily for medical applications. Proponents hope, however, that the technology could eventually enhance cognition, let people control devices with their thoughts, and even enable a merging of human and artificial intelligence. Now, the Neural Engineering and Computation Lab at UCLA has demonstrated a promising alternative route by pairing AI with less invasive brain-computer interfaces.
Implanting these devices requires risky brain surgery, and immune reactions can degrade their performance or force their removal. Such risks may be justifiable when treating serious disability or disease, but the trade-off is far murkier for healthy individuals.
Less invasive brain interfaces that record electrical signals from outside the skull exist but are typically less accurate. UCLA researchers have shown that combining these devices with an AI copilot can significantly boost performance and allow control of a robotic arm.
The researchers used a cap with 64 electrodes to capture EEG signals and developed a custom algorithm to decode them. This system was tested on four participants, including one paralyzed individual.
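The paper's decoder is custom and its details are not described here. As a rough illustration of the general idea, a minimal decoder might map a window of multi-channel EEG features to a 2-D cursor velocity with ridge regression; every shape, name, and parameter below is an assumption for the sketch, not the lab's actual algorithm.

```python
import numpy as np

# Illustrative sketch only: a ridge-regression decoder mapping 64-channel
# EEG features to 2-D cursor velocity, trained on synthetic data.
rng = np.random.default_rng(0)

N_CHANNELS = 64      # electrodes on the cap
N_SAMPLES = 500      # training windows
true_W = rng.normal(size=(N_CHANNELS, 2))   # synthetic ground-truth mapping

X = rng.normal(size=(N_SAMPLES, N_CHANNELS))            # per-window EEG features
Y = X @ true_W + 0.1 * rng.normal(size=(N_SAMPLES, 2))  # noisy 2-D velocities

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_CHANNELS), X.T @ Y)

def decode_velocity(features: np.ndarray) -> np.ndarray:
    """Map one window of EEG features (64,) to a 2-D cursor velocity."""
    return features @ W

v = decode_velocity(X[0])
```

A real pipeline would also filter and window the raw EEG before feature extraction; this sketch starts from precomputed features.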
The first task required participants to move a cursor on a computer screen and hover over each of eight targets for at least half a second. An AI copilot trained with reinforcement learning inferred which target the user intended and helped steer the cursor toward it.
The study found that the AI copilot doubled the success rate of the healthy participants and quadrupled it for the paralyzed participant, demonstrating that this 'shared autonomy' approach, in which control is split between the human and the AI, can significantly boost the performance of non-invasive technology.