Brain Implant Decodes Internal Speech with Password Mechanism
A new brain implant can decode internal speech, but only after the user thinks of a specific password. The brain-computer interface (BCI) deciphers about 74% of imagined sentences, yet it engages only when the user mentally focuses on a predetermined keyword, a safeguard against decoding thoughts the user never meant to share.
The study detailing this achievement was published on August 14 in Cell. According to Sarah Wandelt, a neural engineer at the Feinstein Institutes for Medical Research, the work represents a significant technical advance in accurately decoding internal dialogue. She noted that the password mechanism goes a long way toward protecting users’ privacy, which will be essential if these devices are to see practical use.
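In code, the gating idea is straightforward. The sketch below is a minimal illustration, with a hypothetical keyword_score classifier and decode_sentence decoder standing in for the study’s real models: nothing gets decoded until the imagined password crosses a confidence threshold.

```python
# Minimal sketch of the password-gating idea, assuming a hypothetical
# keyword classifier and sentence decoder; none of this is the study's code.
import numpy as np

rng = np.random.default_rng(1)

def keyword_score(window):
    """Stand-in for a classifier scoring how strongly a window of neural
    activity resembles the user imagining the chosen password."""
    return float(rng.random())

def decode_sentence(stream):
    """Stand-in for the full internal-speech decoder."""
    return "<decoded sentence>"

def gated_decode(stream, threshold=0.9):
    """Run the sentence decoder only after the imagined password is
    detected; otherwise stay silent, leaving unprompted thoughts alone."""
    for window in stream:
        if keyword_score(window) > threshold:
            return decode_sentence(stream)
    return None

windows = [np.zeros(96) for _ in range(20)]  # fake 96-channel neural windows
print(gated_decode(windows))
```

By design, the sentence decoder stays off by default, so stray inner speech never reaches it.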
Preventing Unwanted Eavesdropping
BCIs convert brain signals into text or audio, and they hold promise for restoring speech to people with paralysis or limited motor function. Many current devices require the user to physically attempt to speak, which can be tiring. Wandelt and her team previously built the first BCI capable of decoding internal speech, drawing on signals from the supramarginal gyrus, a brain region crucial for language processing.
However, there’s a concern that these internal-speech BCIs might accidentally interpret thoughts the users prefer to keep private, said Erin Kunz, a neural engineer at Stanford University and co-author of the study. “We wanted to investigate this further,” she added.
Kunz and her team began by analyzing signals from microelectrodes implanted in the motor cortex, the brain region that controls voluntary movement. The four participants all had difficulty speaking, one because of a stroke and the others because of motor neuron disease, and were asked either to attempt saying certain words aloud or simply to imagine saying them.
The researchers found that attempted and imagined speech activated the same brain area, though the signals linked to internal speech were noticeably weaker.
Next, the team used these data to train AI models to recognize phonemes, the basic building blocks of speech, in the neural recordings. Language models then assembled the phonemes into words and sentences in real time, drawing on a vocabulary of 125,000 words.
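In outline, that pipeline has two stages: a classifier scores which phoneme each frame of neural activity most resembles, and a language model assembles the phoneme stream into text. The toy sketch below illustrates the shape of such a pipeline; the phoneme set, CTC-style collapse rule, and one-entry lexicon are all illustrative stand-ins, not the study’s actual models or its 125,000-word vocabulary.

```python
# Toy sketch of the two-stage decoding pipeline described above: a per-frame
# phoneme classifier followed by a lexicon lookup standing in for the language
# model. Phoneme set, lexicon, and logits are all illustrative inventions.
import numpy as np

PHONEMES = ["_", "HH", "AH", "L", "OW"]  # "_" is a CTC-style blank symbol

# Hypothetical pronunciation lexicon; the real system searches a
# 125,000-word vocabulary with a full language model instead.
LEXICON = {("HH", "AH", "L", "OW"): "hello"}

def collapse(frame_labels):
    """CTC-style collapse: merge repeated labels, then drop blanks."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != "_":
            out.append(lab)
        prev = lab
    return tuple(out)

def greedy_decode(logits):
    """Pick the most likely phoneme for each neural frame, then collapse."""
    return collapse([PHONEMES[i] for i in np.argmax(logits, axis=1)])

# Fabricated per-frame scores crafted so the greedy path spells "hello";
# in the study these would come from a network reading motor-cortex activity.
frames = ["HH", "HH", "AH", "L", "L", "_", "OW", "OW"]
logits = np.full((len(frames), len(PHONEMES)), -5.0)
for t, ph in enumerate(frames):
    logits[t, PHONEMES.index(ph)] = 5.0

phones = greedy_decode(logits)
print(phones, "->", LEXICON.get(phones, "<unknown>"))  # ('HH','AH','L','OW') -> hello
```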
The device accurately interpreted 74% of the sentences imagined by two participants who were prompted to think of specific phrases, an accuracy comparable to that of earlier BCIs decoding attempted speech, according to Kunz.