It sounds like science fiction. People who are completely paralyzed due to brain injury, brainstem stroke, or amyotrophic lateral sclerosis (ALS, Lou Gehrig’s disease), and who cannot express words with their mouths, can “speak” artificially using only the power of thought.
The novel speech neuroprosthesis (a speech brain-computer interface) artificially articulates the building blocks of speech based on high-frequency activity in brain areas that had never before been harnessed for a neuroprosthesis: the anterior cingulate and orbitofrontal cortices and the hippocampus.
85% accuracy
“The 37-year-old patient in the study is an epilepsy patient who was hospitalized to undergo resection of the epileptic focus in his brain,” explained Tankus. The patient has intact speech and was implanted with depth electrodes for clinical reasons only. During the first set of trials, he made the neuroprosthesis produce the different vowel sounds artificially with 85% accuracy, and his performance improved consistently in the trials that followed. “We show that a neuroprosthesis trained on overt speech data can be controlled silently,” Tankus and colleagues wrote.
“To do this, of course, you need to locate the focal point, which is the source of the ‘short’ that sends powerful electrical waves through the brain. This involves a smaller subset of epilepsy patients who don’t respond well to medication and require neurosurgical intervention, and an even smaller group whose suspected focus is located deep within the brain rather than on the surface of the cortex.”
How it works
In the experiment’s first stage, with the depth electrodes already implanted in the patient’s brain, the team asked him to say two syllables out loud: /a/ and /e/. They recorded the brain activity as he articulated these sounds. Using deep learning and machine learning, the researchers trained artificial intelligence models to identify the specific brain cells whose electrical activity indicated the desire to say /a/ or /e/.
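The decoding step described above can be sketched in code. The study's actual models and recordings are not public here, so the following is a minimal illustrative sketch with invented numbers: it simulates firing rates from a handful of hypothetical units (some more active for /a/, others for /e/), trains a simple nearest-centroid decoder on labeled "overt speech" trials, and then classifies new trials. The unit counts, firing rates, and classifier choice are all assumptions, not the researchers' method.

```python
import random

random.seed(0)

# Hypothetical data: each trial yields firing rates (spikes/s) from four
# recorded units. Units 0-1 are assumed to fire more for /a/, units 2-3
# more for /e/. These numbers are illustrative, not from the study.
def simulate_trial(syllable):
    base = {"a": [30, 25, 5, 8], "e": [6, 9, 28, 24]}[syllable]
    return [r + random.gauss(0, 3) for r in base]

def centroid(trials):
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def train(labeled_trials):
    # Nearest-centroid "decoder": one average firing-rate vector per sound.
    return {s: centroid([t for lab, t in labeled_trials if lab == s])
            for s in ("a", "e")}

def decode(model, rates):
    # Pick the sound whose centroid is closest (squared Euclidean distance).
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(rates, c))
    return min(model, key=lambda s: dist(model[s]))

# Stage 1: trials recorded while the sounds are spoken out loud.
training = [(s, simulate_trial(s)) for s in ("a", "e") for _ in range(20)]
model = train(training)

# Stage 2: decode fresh (here, simulated) trials and measure accuracy.
test = [(s, simulate_trial(s)) for s in ("a", "e") for _ in range(50)]
accuracy = sum(decode(model, t) == s for s, t in test) / len(test)
print(f"decoding accuracy: {accuracy:.0%}")
```

In practice a deep-learning model replaces the nearest-centroid rule, but the overall flow is the same: labeled overt-speech trials supply the training signal, and the fitted model then maps new neural activity to the intended sound.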
“In this experiment, for the first time in history, we were able to connect the parts of speech to the activity of individual cells from the regions of the brain from which we recorded. This allowed us to differentiate among the electrical signals that represent the sounds /a/ and /e/. At the moment, our research involves two building blocks of speech, two syllables. Of course, our ambition is to get to complete speech, but even two different syllables can enable a fully paralyzed person to signal ‘yes’ and ‘no.’”