Silent speech: Tel Aviv’s thought-powered communication for the paralyzed


It sounds like science fiction. People who are completely paralyzed due to brain injury, brainstem stroke, or amyotrophic lateral sclerosis (ALS, Lou Gehrig’s disease), and who cannot express words with their mouths, can “speak” artificially using only the power of thought.

Loss of speech due to injury or disease is devastating. Now, a scientific breakthrough by researchers from Tel Aviv University (TAU) and Tel Aviv Sourasky Medical Center (TASMC) has shown that a silent person can potentially speak using the power of thought alone. In an experiment, such a patient imagined saying one of two syllables. Depth electrodes implanted in his brain transmitted the electrical signals to a computer, which then vocalized the syllables.

The study was led by Dr. Ariel Tankus of TAU’s School of Medical and Health Sciences and the medical center, together with Dr. Ido Strauss of TAU’s School of Medical and Health Sciences and director of the hospital’s functional neurosurgery unit. 

The results of this groundbreaking study have just been published in the prestigious journal Neurosurgery, which is the official publication of the Congress of Neurological Surgeons, under the title “A speech neuroprosthesis in the frontal lobe and hippocampus: decoding high-frequency activity into phonemes.”


The novel speech neuroprosthesis (a speech brain-computer interface) artificially articulates building blocks of speech based on high-frequency activity in brain areas – the anterior cingulate and orbitofrontal cortices, and hippocampus – that had never been harnessed for a neuroprosthesis before. 
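The paper’s title refers to decoding “high-frequency activity” into phonemes; in intracranial recordings this usually means power in a high-gamma-like band. As a rough sketch only – the band edges, filter design, and envelope method below are assumptions for illustration, not the study’s published pipeline – such a feature could be extracted like this:

```python
# Illustrative only: extract a high-frequency power envelope from one
# depth-electrode channel. The band edges (70-150 Hz), filter order, and
# Hilbert-envelope approach are assumptions, not the study's pipeline.
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def high_frequency_power(signal: np.ndarray, fs: float,
                         band: tuple = (70.0, 150.0)) -> np.ndarray:
    """Return the per-sample high-frequency power envelope of a channel."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)      # zero-phase band-pass filter
    envelope = np.abs(hilbert(filtered))     # analytic-signal amplitude
    return envelope ** 2                     # instantaneous power
```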

The achievement offers hope for making it possible for people who are completely paralyzed to regain the ability to speak voluntarily.

85% accuracy

“The 37-year-old patient in the study is an epilepsy patient who was hospitalized to undergo resection of the epileptic focus in his brain,” explained Tankus. The patient has intact speech and was implanted with depth electrodes for clinical reasons only. During the first set of trials, he made the neuroprosthesis produce the different vowel sounds artificially with 85% accuracy, and in the following trials his performance improved consistently. “We show that a neuroprosthesis trained on overt speech data can be controlled silently,” Tankus and colleagues wrote.

“To do this, of course, you need to locate the focal point, which is the source of the ‘short’ that sends powerful electrical waves through the brain. This situation involves a smaller subset of epilepsy patients who don’t respond well to medication and require neurosurgical intervention, and an even smaller group whose suspected focus is located deep within the brain, rather than on the surface of the cortex.”

To identify the exact location, electrodes are implanted into deep structures in these patients’ brains. The patients are then hospitalized and wait until they suffer another seizure. When it occurs, the electrodes tell the neurosurgeons and neurologists where the focus is, allowing them to perform a precise operation.

From a scientific perspective, this provides a rare opportunity to glimpse into the depths of a living human brain, the researchers said. Fortunately, the epilepsy patient hospitalized at TASMC agreed to participate in the experiment, which could ultimately help completely paralyzed individuals express themselves again through artificial speech.

How it works

In the experiment’s first stage, with the depth electrodes already implanted in the patient’s brain, the team asked him to say two syllables out loud: /a/ and /e/. They recorded the brain activity as he articulated these sounds. Using deep learning and machine learning, the researchers trained artificial intelligence models to identify the specific brain cells whose electrical activity indicated the desire to say /a/ or /e/. 
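The article does not detail the models the team trained. As a minimal sketch of this training stage – with the feature layout, classifier choice, and scikit-learn toolchain all assumptions, and synthetic stand-in data in place of real recordings – separating /a/ trials from /e/ trials could look like this:

```python
# Minimal sketch, assuming one feature vector per overt-speech trial
# (e.g., mean high-frequency power per recorded channel). The data here
# are synthetic stand-ins, so measured accuracy will be near chance.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))        # 120 trials x 16 features (synthetic)
y = rng.integers(0, 2, size=120)      # 0 = /a/, 1 = /e/ (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)   # estimate decoding accuracy
print(f"cross-validated accuracy: {scores.mean():.2f}")
```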

Once the computer learned to recognize the pattern of electrical activity associated with these two syllables in the patient’s brain, he was asked to only imagine that he was saying /a/ and /e/. The computer then translated the electrical signals and played the pre-recorded sounds of /a/ or /e/ accordingly.
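A minimal sketch of this silent stage, assuming a classifier trained as above and hypothetical pre-recorded syllable files (every name here is illustrative, not the study’s software):

```python
# Illustrative closed-loop step: decode one imagined-speech trial and
# select the matching pre-recorded syllable. `model` is assumed to be a
# trained classifier as above; audio playback is stubbed out.
import numpy as np

PRERECORDED = {0: "a.wav", 1: "e.wav"}   # hypothetical syllable recordings

def decode_and_play(features: np.ndarray, model) -> str:
    """Classify one trial's feature vector and return the file to play."""
    label = int(model.predict(features.reshape(1, -1))[0])
    wav_file = PRERECORDED[label]
    # play_audio(wav_file)  # hypothetical playback hook (e.g., sounddevice)
    return wav_file
```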

“My field of research deals with the encoding and decoding of speech – how individual brain cells participate in the speech process – the production and hearing of speech, and the imagination of speech, or ‘speaking silently,’” Tankus continued.

“In this experiment, for the first time in history, we were able to connect the parts of speech to the activity of individual cells from the regions of the brain from which we recorded. This allowed us to differentiate among the electrical signals that represent the sounds /a/ and /e/. At the moment, our research involves two building blocks of speech, two syllables. Of course, our ambition is to get to complete speech, but even two different syllables can enable a fully paralyzed person to signal ‘yes’ and ‘no.’”

They believe that in the future, it will be possible to train a computer for an ALS patient in the early stages of the disease, when they can still speak. The computer would learn to recognize the electrical signals in the patient’s brain, making it possible for the computer to interpret these signals even after the patient loses the ability to move their muscles. “And that is just one example. Our study is a significant step toward developing a brain-computer interface that can replace the brain’s control pathways for speech production, allowing completely paralyzed individuals to communicate voluntarily with their surroundings once again.”

