Neuroprosthesis turns brain signals into words on a screen
Researchers at UC San Francisco have developed a speech neuroprosthesis that enabled a man with severe paralysis to communicate in full sentences, translating signals from his brain to his vocal tract directly into words that appear as text on a screen. The advance was developed in collaboration with a participant in a clinical research trial and builds on more than a decade of effort.
Neurosurgeon Edward Chang, MD, says that, to his knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak. The advance shows promise for restoring communication by tapping into the brain's natural speech machinery. Losing the ability to speak is, unfortunately, not uncommon, occurring after stroke, accident, or disease.
Being unable to communicate is a significant detriment to a person’s health and well-being. While most work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches that type out letters one by one, this study takes a different route: Chang and his team translate signals intended to control the muscles of the vocal system for speaking words, rather than signals to move an arm or hand for typing.
Chang says the team’s approach taps into the natural and fluid aspects of speech and promises faster, more organic communication. In natural speech, people typically communicate at rates of up to 150 to 200 words per minute. No spelling-based approach comes close to that rate, which makes those forms of communication considerably slower.
Capturing the brain’s signals and going straight to words is much closer to how we normally speak. Chang has worked toward the speech neuroprosthesis over the past decade, aided by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity.
Chang and his colleagues mapped the cortical activity patterns associated with the vocal tract movements that produce each consonant and vowel. To translate those findings into recognition of full words, they developed methods for real-time decoding of those patterns, together with statistical language models to improve accuracy. The first participant in the trial was a man in his late 30s who had suffered a brainstem stroke more than 15 years earlier that severed the connection between his brain and his vocal tract and limbs.
As a result, he has extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen. The participant worked with Chang and his team to create a 50-word vocabulary that could be recognized from brain activity, enough to compose hundreds of sentences expressing concepts applicable to his daily life.
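The article describes the decoding pipeline only at a high level: a classifier estimates which vocabulary word was attempted from neural activity, and a statistical language model rescores the candidates to favor fluent sentences. The Python sketch below is purely illustrative, not the team's actual system: the four-word vocabulary (standing in for the 50-word set), the classifier probabilities, and the bigram model are all invented, but the Viterbi step is the standard way such a language model is combined with per-word scores.

```python
import numpy as np

# Hypothetical 4-word vocabulary standing in for the trial's 50-word set.
VOCAB = ["i", "am", "good", "thirsty"]
V = len(VOCAB)

# Per-attempt word probabilities, as a classifier over neural activity
# might emit (rows: attempted words in order; columns: VOCAB).
# These numbers are invented for illustration.
classifier_probs = np.array([
    [0.70, 0.15, 0.10, 0.05],   # clearly "i"
    [0.30, 0.35, 0.20, 0.15],   # ambiguous: "am" vs. "i"
    [0.05, 0.10, 0.44, 0.41],   # ambiguous: "good" vs. "thirsty"
])

# Toy bigram language model P(next word | previous word), also invented:
# "i am" and "am good"/"am thirsty" are likely; repeats are not.
bigram = np.array([
    #  i     am    good  thirsty   (next word)
    [0.05, 0.75, 0.10, 0.10],    # after "i"
    [0.05, 0.05, 0.45, 0.45],    # after "am"
    [0.40, 0.20, 0.20, 0.20],    # after "good"
    [0.40, 0.20, 0.20, 0.20],    # after "thirsty"
])

def viterbi_decode(probs, bigram):
    """Return the most likely word sequence by combining the classifier's
    per-word probabilities with the language model (Viterbi algorithm)."""
    T = probs.shape[0]
    log_p, log_lm = np.log(probs), np.log(bigram)
    score = log_p[0].copy()                 # flat prior over the first word
    back = np.zeros((T, V), dtype=int)      # best predecessor per word
    for t in range(1, T):
        cand = score[:, None] + log_lm + log_p[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]            # trace the best path backward
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

print(viterbi_decode(classifier_probs, bigram))
# -> ['i', 'am', 'good']
```

Run as-is, the language model resolves the ambiguous second and third attempts toward the fluent sentence "i am good", which is the intuition behind using statistical language patterns to improve decoding accuracy.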