Published: Sat, April 27, 2019
Medicine | By Brett Sutton

Device converts brain signals into speech, offering hope for patients

Last Updated: April 24, 2019.

To get a better grasp on how the technology might help people with communication disabilities, the researchers repeated the experiment.

"Very few of us have any idea of what's going on in our mouths when we speak," said Edward Chang, the lead study author. "This study provides a proof of principle that this is possible."

Researchers conducted the study with a handful of volunteers who already had temporary electrodes implanted in their brains in preparation for neurosurgery to treat epilepsy.

The breakthrough offers hope for people suffering from stroke, brain injury, or neurodegenerative illnesses such as Parkinson's, multiple sclerosis and amyotrophic lateral sclerosis, the condition that killed Professor Stephen Hawking.

Another promising discovery is that the neural code for vocal movements isn't necessarily unique to every individual. The team's artificial vocal tract was controlled by the volunteers' brain activity and instructed a synthesizer to generate speech. "We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract."

This is far from the first computer-based effort to recapitulate speech, but previous efforts, some of which relied on reading facial movements or on painstakingly typing out words letter by letter, maxed out at a rate of about eight words per minute.

Scientists in the United States have found a way to generate synthetic speech by decoding brain activity.

Based on the audio recordings of participants' voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.
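In other words, the decoding works in two stages: recorded brain activity is first mapped to estimated vocal-tract movements, and those movements are then mapped to acoustic features that drive a synthesizer. Purely as an illustration of that two-stage idea, the toy sketch below stands in random linear mappings and made-up feature sizes for the study's actual trained neural networks; none of the dimensions or mappings here come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen for illustration only.
N_NEURAL = 256     # recorded neural features per time step
N_ARTIC = 33       # articulatory features (lips, tongue, jaw, larynx)
N_ACOUSTIC = 32    # acoustic features fed to a speech synthesizer

# Stage 1 maps brain activity to vocal-tract movements; stage 2 maps
# those movements to sound features. Random matrices stand in for the
# trained decoders used in the real study.
brain_to_articulation = rng.normal(size=(N_NEURAL, N_ARTIC))
articulation_to_acoustics = rng.normal(size=(N_ARTIC, N_ACOUSTIC))

def decode(neural_activity: np.ndarray) -> np.ndarray:
    """Turn neural activity (time x N_NEURAL) into acoustic features."""
    articulation = neural_activity @ brain_to_articulation    # stage 1
    acoustics = articulation @ articulation_to_acoustics      # stage 2
    return acoustics

# One second of fake recordings, 100 time steps.
neural = rng.normal(size=(100, N_NEURAL))
acoustic_features = decode(neural)
print(acoustic_features.shape)  # (100, 32)
```

The intermediate articulatory step is the point: rather than decoding sound directly from the brain, the system first recovers the movements the mouth was trying to make, which the researchers found carries over better between people.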

The sentences used in the study were simple declarative statements, including "Ship building is a most fascinating process" and "Those thieves stole thirty jewels".

Gopala Anumanchipalli, the co-author of the study, said, "We used sentences that are particularly geared towards covering all of the phonetic contexts of the English language".

The scientists wrote: "Listeners were able to transcribe synthesized speech well."

The researchers had each patient speak or mime full sentences, then mapped how the brain directs the entire vocal system to produce sounds. Listeners accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy.

The research is designed around the problem of restoring speech to patients whose neurological disorders have left them unable to speak.

But when given a list of 50 words to choose from, listeners identified only 47 percent of words correctly and understood just 21 percent of synthesized sentences.

Josh Chartier, a bioengineering graduate student in the Chang lab, said: "We still have a ways to go to perfectly mimic spoken language. We've got to make it more natural, more intelligible."

Still, he added, "The levels of accuracy we produced here would be a wonderful improvement in real-time communication compared to what's now available."

The researchers are now experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further.
