New Delhi: Researchers at the University of California, San Francisco (UCSF), in the United States, have successfully tested an experimental brain implant that translates brain signals into words on a computer screen.
The implant was tested on a 36-year-old man, who used it to communicate words and sentences in English, 15 years after a stroke left him unable to speak. In addition to anarthria (loss of the ability to speak), the man also suffers from spastic quadriparesis (muscle stiffness and weakness affecting all four limbs).
Although the study was published in the New England Journal of Medicine Wednesday, the experiment began in February 2019, when the researchers implanted a subdural, high-density multielectrode array in the man’s brain (he was 36 in 2019), over the part of the brain that controls speech.
The study was carried out in 50 sessions over 81 weeks, during which he was asked to form both isolated words and complete sentences. The words consisted of common English words and phrases, including those that he would regularly use to communicate with his caregivers — before the implant he communicated using an assistive computer-based typing interface, controlled by his head movement.
The study was partly funded by Facebook’s Sponsored Academic Research Agreement, and on Wednesday, the social media platform celebrated the achievement with a blog post.
“Today we’re excited to celebrate the next chapter of this work and a new milestone that the UCSF team has achieved and published in The New England Journal of Medicine: the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting speech. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the muscles that control the vocal tract — a milestone in neuroscience,” the blog explained.
Decoding speech attempt
The researchers began collecting data in April 2019, when they used a digital-signal processing system to acquire signals from the implanted device and transmit them to a computer running custom-built software for real-time analysis. Using this process, brain signals were translated into words that could be read on a screen.
In each of the trial tasks, the man was shown words, which he then attempted to speak.
The researchers created speech-detection and word-classification models and used deep-learning techniques to make predictions from his neural activity. Deep learning is a machine-learning technique based on artificial neural networks.
“The speech-detection model processed each time point of neural activity during a task and detected onsets and offsets of word-production attempts in real-time,” the study explained.
“For each attempt that was detected, the word-classification model predicted a set of word probabilities by processing the neural activity spanning from one second before to three seconds after the detected onset of attempted speech. The predicted probability associated with each word in the 50-word set quantified how likely it was that the participant was attempting to say that word during the detected attempt.”
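The two-stage pipeline the study describes — an always-on detector that flags the onset and offset of each speech attempt, followed by a classifier that turns a fixed window of neural activity (1 second before to 3 seconds after onset) into probabilities over the 50-word set — can be sketched in a much-simplified form. Everything below (the frame rate, channel count, threshold-based detector, and linear classifier with random weights) is a toy stand-in for illustration, not the study’s actual deep-learning models:

```python
import numpy as np

RATE = 10                                   # assumed feature frames per second
VOCAB = [f"word{i}" for i in range(50)]     # stand-in for the 50-word set

def detect_attempts(activity, threshold=1.0):
    """Return (onset, offset) frame indices where activity crosses a threshold."""
    above = activity > threshold
    attempts, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            attempts.append((start, i))
            start = None
    if start is not None:
        attempts.append((start, len(above)))
    return attempts

def classify_attempt(features, onset, weights):
    """Pool the window from 1 s before to 3 s after onset, then softmax."""
    lo = max(0, onset - 1 * RATE)
    hi = min(len(features), onset + 3 * RATE)
    window = features[lo:hi].mean(axis=0)   # crude pooling of the window
    logits = weights @ window
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()              # one probability per vocabulary word

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 8))        # 10 s of 8-channel toy features
activity = np.zeros(100)
activity[30:45] = 2.0                       # one simulated speech attempt
weights = rng.normal(size=(len(VOCAB), 8))  # toy linear classifier

for onset, offset in detect_attempts(activity):
    probs = classify_attempt(features, onset, weights)
    print(VOCAB[int(probs.argmax())], round(float(probs.max()), 3))
```

The key design point the quote conveys is preserved here: detection and classification are decoupled, so the classifier only runs on windows the detector flags, and its output is a full probability distribution over the vocabulary rather than a single hard guess.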
The researchers successfully decoded sentences attempted by the participant at the rate of 15.2 words per minute, with a median error rate of 26.6 per cent.
Ninety-eight per cent of the participant’s attempts to form individual words were successfully detected, and the researchers decoded these words with 47.1 per cent accuracy.
(Edited by Poulomi Banerjee)