
BCI Award 2019 Winner: Decoding speech from dorsal motor cortex
Sergey D. Stavisky from Stanford University, USA, and his team detected speech-related neural activity on the same Utah arrays already placed in the dorsal “arm/hand” area of motor cortex of an intracortical BCI clinical trial participant with tetraplegia. This enabled them to study motor cortical dynamics during speech production at the unprecedented resolution of populations of single neurons, and to prototype classifiers that distinguished 9 spoken syllables with 84.6% accuracy and 10 words with 83.5% accuracy. Read the interview with Sergey, who won 1st place in the BCI Award 2019.
Hi Sergey, you submitted your BCI research “Decoding speech from intracortical multielectrode arrays in dorsal motor cortex” to the BCI Award 2019 and won 1st place. Could you briefly describe what this project was about?
Sergey: “We recorded directly from inside an area of the brain traditionally thought of as controlling arm and hand movements, and we found that the neural signals there also reflected what the person was speaking. This allowed us to prototype ways to identify what the person was saying, which is a first step to building a BCI for restoring speech.”
What was your goal?
Sergey: “This was our team’s first foray into speech BCIs, so initially we just wanted to see if we would even see any speech-related neural activity. We already had electrode arrays implanted in the “arm and hand” area of motor cortex of participants in the BrainGate2 BCI pilot clinical trial, and much of our previous work focused on decoding attempted arm movements (for example, see our consortium’s 2018 BCI Award submission). While there had been some incredible recent demonstrations of speech decoding using electrocorticography, those studies used signals from much more ventral brain areas than where our arrays were placed. I didn’t have high expectations going in: it was a bit of a “let’s take a look and see what we see” exercise when we asked our participants to speak. When we did find speech-related activity, this was both surprising and exciting! From there, we shifted into high gear to try to decode these signals.”
What technologies did you use?
Sergey: “We made these recordings using two Blackrock Microsystems 96-electrode arrays. It’s the only intracortical sensor approved for long-term human use, and it’s allowed us to learn a great deal about the brain and to demonstrate what is possible using an implanted BCI. Looking to the future, I’m excited about the possibility of getting even better neural signals using new and improved implanted neural recording devices. In terms of the algorithms, in this initial work we used conventional statistics techniques, but we’re now applying modern deep learning techniques to much larger human speaking datasets.”
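To make the “conventional statistics” idea concrete, here is a hypothetical sketch (not the team’s actual pipeline) of one simple way to classify a spoken syllable from trial-averaged firing rates: fit a mean firing-rate “template” per syllable and assign each trial to the nearest template. The unit counts, syllable labels, and simulated data below are illustrative assumptions only.

```python
# Hypothetical sketch, NOT the authors' actual method: nearest-class-mean
# classification of syllables from per-trial firing-rate vectors.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_trials_per_syllable = 96, 20   # e.g. one 96-electrode array
syllables = ["ba", "ga", "da"]            # illustrative labels only

# Simulate trials: each syllable evokes a distinct mean firing-rate pattern,
# plus trial-to-trial noise. Real data would be binned spike counts.
means = rng.normal(10.0, 3.0, size=(len(syllables), n_units))
X = np.concatenate([m + rng.normal(0.0, 1.0, (n_trials_per_syllable, n_units))
                    for m in means])
y = np.repeat(np.arange(len(syllables)), n_trials_per_syllable)

def fit_class_means(X, y):
    """Mean firing-rate vector per syllable (the class 'template')."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(templates, X):
    """Assign each trial to the nearest template by Euclidean distance."""
    dists = np.linalg.norm(X[:, None, :] - templates[None, :, :], axis=2)
    return dists.argmin(axis=1)

templates = fit_class_means(X, y)
accuracy = (predict(templates, X) == y).mean()
print(f"training-set accuracy: {accuracy:.1%}")
```

In practice one would cross-validate rather than score on training data, and a regularized linear classifier or a recurrent network (as in the deep-learning follow-up work Sergey mentions) would replace the nearest-mean rule.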
What kind of people could benefit from your research?
Sergey: “Showing that we can decode speech-related activity from implanted electrode arrays is a first step towards building BCIs to restore speech. This could make a tremendous difference for people who have lost their ability to speak, for example due to stroke, spinal cord injury, ALS, or vocal tract injury. In doing so, we’re studying speech production with single neuron resolution, which I hope will lead to fundamental scientific discoveries in addition to the more direct translational applications.”
Do you think your work has future potential for clinical use?
Sergey: “I absolutely do, but there’s a long road to get there. First of all, here we showed that we can identify which of a small number of syllables or words was spoken, in isolation. A clinical speech BCI should be capable of synthesizing a full range of continuous speech. Second, we identified sounds that the participants actually spoke out loud. There are additional challenges in building the map from neural activity to speech if the user isn’t able to speak at all. Third, we recorded neural activity using electrodes that have external wires coming out through the scalp. In a clinical system, the sensors need to become fully implanted. Fortunately, there’s a lot of work being done on all of these fronts by many groups, including ours.”
How was it to win the BCI Award 2019?
Sergey: “It was fantastic news! The whole team was delighted. There’s so much effort over many, many months that goes into BCI research (for example, we started this project in Autumn 2017), so it’s really nice to have awards like this that come as a pleasant surprise and recognize the work.”
Decoding speech from intracortical multielectrode arrays in dorsal motor cortex
Sergey D. Stavisky1, Francis R. Willett1, Paymon Rezaii1, Leigh R. Hochberg2, Krishna V. Shenoy1,3, Jaimie M. Henderson1
1 Stanford University, USA.
2 Brown University, Harvard Medical School, Massachusetts General Hospital, Providence VA Medical Center, USA.
3 Howard Hughes Medical Institute, USA.