BCI Award 2017 Winner: Decoding Inner Speech: Utopia or Reality?
Stephanie Martin studied Life Sciences and Technology at EPFL (École polytechnique fédérale de Lausanne), where she developed her skills in mathematics, physics, programming, and everything related to the human body, including biology and biochemistry. For her master’s program, she chose Neuroscience with a special focus on neuroprosthetics and neuroengineering. During her PhD, she dedicated her research to decoding inner speech. She submitted her work to the BCI Award 2017 and won the 2nd prize. We had the chance to talk with Stephanie about her PhD research and her opinion about the BCI Award.
Stephanie, how did you come up with the idea to decode inner speech?
Stephanie Martin: “In the second year of the master’s program, I had the chance to go abroad. I went to UC Berkeley in California to join Robert Knight’s Cognitive Neuroscience Lab, where I could practice what I had learnt at the university so far. This is when I started to work on a speech decoding project. During this year abroad, I recorded the brain activity of epilepsy patients who had electrodes implanted on the surface of the brain by a neurosurgeon so that clinicians could monitor their brains and localize seizures and epileptic foci. During the recordings, we did some experiments and language tasks with the patients. Afterwards, I analyzed the recorded ECoG data.”
And then you came back to EPFL?
Stephanie Martin: “Yes. I really liked the master’s project at Robert Knight’s lab and I felt like I wasn’t done with it. So I decided to join the Brain-Computer Interface Lab here at EPFL for my PhD, with the intention of continuing to collaborate with Bob Knight in Berkeley, because I wanted the cognitive and neurological aspects as well as the engineering skills from Jose Millan here at EPFL, who was my professor and supervisor.”
You submitted your PhD project and results to the BCI Award 2017. Could you explain your PhD research a little bit?
Stephanie Martin: “During my PhD, I continued to work on my speech decoding project, so I had to come up with new task designs with the patients and record more data. The goal was to decode inner speech and to provide an assistive technology to people who can’t talk or communicate. I was wondering whether it is possible to decode inner speech directly from neural activity, in a more natural way than other assistive technologies such as BCI spellers, which usually let you control a spelling device or move a cursor on the screen to pick one letter. I wanted to provide an alternative way of communication. It was a very difficult project because inner speech is very hard to monitor. You can’t say precisely what and when people think. For instance, if I think “I am hungry”, then the questions are: When did I start to think that? When did I finish thinking it? It’s difficult to label the data. So, my task was to design experiments that allowed me to label the data, that is, to know what and when people are thinking, and to come up with the best algorithm to extract the information I needed. That was quite a challenge I had to face during my PhD.”
Did you manage to decode inner speech?
Stephanie Martin: “Yes. There were several different aspects I tried to investigate during my PhD, because inner speech is still largely unknown. Your “thinking” can be abstract: it can be a representation, or it can be your own voice in your head. I tried to analyze different speech representations that are also encoded when inner speech is active. For instance, when you speak out loud, you can hear the sound, and you have phonetic decompositions, words, and semantics. We know how those aspects are encoded. I investigated whether those aspects are also encoded during inner speech, and if so, whether I can identify if a person is thinking one word or another.”
What does this mean for Brain-Computer Interface technology?
Stephanie Martin: “If it is possible to naturally decode whether a person is thinking “Yes” or “No”, “Hungry” or “Pain” (a few clinically relevant words), then this would be the next step for future BCI. We showed that we could predict which word a person is producing internally in his/her mind. That’s why we submitted my PhD research to the BCI Award 2017.”
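Predicting which of two words a person is producing internally boils down to pairwise classification of neural features. The sketch below illustrates the general idea with a regularized linear classifier in scikit-learn; the synthetic band-power features, electrode count, and labels are all assumptions for illustration, not Stephanie’s actual data or pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for per-trial band-power features extracted
# from ECoG electrodes: shape (n_trials, n_electrodes).
rng = np.random.default_rng(42)
yes_trials = rng.normal(0.5, 1.0, size=(40, 16))    # subject thinks "yes"
no_trials = rng.normal(-0.5, 1.0, size=(40, 16))    # subject thinks "no"

X = np.vstack([yes_trials, no_trials])
y = np.array([1] * 40 + [0] * 40)                   # 1 = "yes", 0 = "no"

# A regularized linear model evaluated with cross-validation is a common
# choice for small neural datasets, since it limits overfitting.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"pairwise decoding accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy above chance (0.5) is the usual evidence that the two inner-speech conditions are separable in the recorded features.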
Photo: These three images show an ECoG grid and surgical placement. Brain surgeons sometimes need to very precisely monitor activity over different brain areas, which requires placing ECoG electrodes on the surface of the brain. (a) ECoG surgical placement. (b) Radiography of electrode placement. (c) Electrode positions in situ.
How do you imagine future BCI in terms of language?
Stephanie Martin: “Currently, all the results we showed were obtained from people with healthy language function and were analyzed offline. We recorded epilepsy patients in the hospital, then we got the data, then we analyzed the data, and finally we saw the results. The next step is to go online: to replicate these results in real time, decode brain activity and thinking from people with language disabilities (e.g. aphasic patients), and then speak it out loud as a proof of concept. But I think we are very far away from this; I would say it’s unrealistic in the near future. For instance, the results we showed only classified one word versus another word, a “yes” versus a “no”. There are still many steps needed to improve BCI accuracy. Right now, it’s really difficult to extract the information because of the low signal-to-noise ratio and the electrode locations. The questions remain: How can we improve the accuracy and move to a more realistic speech device?”
What were the biggest challenges or the most successful moments for you?
Stephanie Martin: “Well, during a PhD you always have your ups and downs. Research is like a rollercoaster. I think it’s a big challenge to explore inner speech because, unlike when you speak out loud or hear speech, you can’t know exactly when the brain responded and mark the data accordingly. With overt speech, the brain activity is also much stronger; you have beautiful data. With inner speech, everything becomes more blurred because separating signal from noise is much harder. If I say the word “Hungry” ten times, there is variability because the brain activity is never going to be the same. In addition, you don’t know when the person starts to think, the onset, the offset, so it becomes difficult to extract the information. You know more or less that inner speech happens at a certain moment, but you don’t know when the different sounds happen. That is the frustrating part. The questions remain: How do you come up with an algorithm that deals with these specific issues? How do you design tasks that extract as much information as possible from this problem? And then, I think that’s the PhD life, you want to find solutions. That’s when I tried to adapt classical algorithms to this specific problem. In the end, when we got the results, it was a moment of euphoria; it was really exciting to see the outcome of so much work.”
Photo: Time course of changes in brain activity. Brain activity is averaged across trials and z-scored with respect to the pre-auditory-stimulus baseline condition (500 ms interval) for different electrodes. The top plot displays the different conditions for the word repetition task (L = listening, I = inner speech, O = overt speech), an example of the averaged time course for a representative electrode, and the averaged audio envelope (red line).
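The caption describes a standard preprocessing step: z-scoring each electrode’s activity against a 500 ms pre-stimulus baseline, then averaging across trials. A minimal NumPy sketch of that computation (the function name, array layout, and sampling rate are assumptions, not taken from the paper):

```python
import numpy as np

def baseline_zscore(trials, fs, baseline_ms=500):
    """Z-score activity against a pre-stimulus baseline, then average over trials.

    trials: array of shape (n_trials, n_electrodes, n_samples), where the
            first `baseline_ms` of each trial precede the auditory stimulus.
    fs:     sampling rate in Hz.
    """
    n_base = int(baseline_ms / 1000 * fs)        # samples in the baseline window
    baseline = trials[:, :, :n_base]             # pre-stimulus interval
    mu = baseline.mean(axis=2, keepdims=True)    # per-trial, per-electrode mean
    sigma = baseline.std(axis=2, keepdims=True)  # per-trial, per-electrode std
    z = (trials - mu) / sigma                    # z-scored activity
    return z.mean(axis=0)                        # average across trials
```

Expressing every electrode’s signal in baseline standard deviations makes post-stimulus changes comparable across electrodes and trials, which is what allows the averaged time courses in the figure to be plotted on a common scale.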
What do you think about the international BCI Award?
Stephanie Martin: “Originally, I thought my research was a bad candidate for the BCI Award because I assumed it had to be closed-loop Brain-Computer Interface research, something that works, something that shows an interface. After discussing this with my supervisor Jose Millan, who suggested that my speech decoding research was a good candidate even without a BCI per se, I submitted my project to the 2017 BCI Award. I thought my project might be relevant for the field of BCI; it could open doors to new BCI applications. The BCI Award is open to any project that is not necessarily a BCI per se, but relevant for the field. I won the 2nd Prize of the BCI Award 2017, which was really encouraging. In the future, I think I will move to a more closed-loop BCI to increase our chances and maybe win 1st place. The BCI Award is a top prize in the BCI field and renowned internationally. I was happy to include it in my CV and hope to add another BCI Award someday!”