Researchers are investigating ways in which brain-computer devices could “speak” for us using only brain signals. This technology could be a game-changer for people with speech impairments, or for so-called “locked-in” patients, who can’t speak at all due to loss of motor function.
Could brain-computer devices potentially decode our thoughts into understandable speech or written words? The idea might sound far-fetched, but scientists already are decoding speech from the signals our brains generate when we speak or listen to someone speaking. Mind-reading technology of a sort, claim the authors of a review published recently in Frontiers in Human Neuroscience, may be about to emerge from the realm of science fiction.
“So, instead of saying ‘Siri, what is the weather like today?’ or ‘OK, Google, where can I go for lunch?’ I just imagine saying these things,” co-author Christian Herff said in a press release.
Herff and co-author Tanja Schultz wrote “Automatic Speech Recognition from Neural Signals: A Focused Review” to present the findings of their project and review the state-of-the-art technology currently being investigated.
Herff and Schultz compared the advantages and disadvantages of different brain imaging techniques. Functional MRI and near-infrared imaging can detect neural signals based on the metabolic activity of neurons, while EEG and magnetoencephalography (MEG) can detect the electromagnetic activity of neurons responding to speech. One method, electrocorticography (ECoG), showed particular promise in capturing neural signals from the brain.
Some epilepsy patients, who already had electrode grids implanted for treatment, participated in the study that introduced “Brain-to-Text.” They were presented with text to read while the researchers recorded their brain activity. These recordings formed the basis of a database of brain-signal patterns that could then be matched to speech elements, or “phones.” Language and dictionary models were also included in the algorithms, which allowed the scientists to decode neural signals into text with notable accuracy.
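The decoding idea described above can be sketched in miniature: score candidate words by combining phone evidence from the recordings with a pronunciation dictionary and a language-model prior. This is only a toy illustration under invented numbers and a one-phone-per-frame assumption, not the authors’ actual system, which used full automatic-speech-recognition machinery.

```python
import math

# Hypothetical per-frame phone likelihoods "decoded" from neural recordings
# (one dict per recorded frame: phone -> likelihood). Values are invented.
frame_likelihoods = [
    {"h": 0.7, "k": 0.2, "ae": 0.1},
    {"ae": 0.6, "eh": 0.3, "h": 0.1},
    {"t": 0.8, "d": 0.2},
]

# A tiny pronunciation dictionary: word -> phone sequence.
dictionary = {
    "hat": ["h", "ae", "t"],
    "cat": ["k", "ae", "t"],
    "had": ["h", "ae", "d"],
}

# A unigram "language model": prior probability of each word.
language_model = {"hat": 0.2, "cat": 0.5, "had": 0.3}

def score_word(word):
    """Log-probability of a word: phone evidence times language-model prior."""
    phones = dictionary[word]
    if len(phones) != len(frame_likelihoods):
        return float("-inf")  # toy simplification: exactly one phone per frame
    log_p = math.log(language_model[word])
    for frame, phone in zip(frame_likelihoods, phones):
        log_p += math.log(frame.get(phone, 1e-6))
    return log_p

best = max(dictionary, key=score_word)
```

Here the neural evidence for the phone “h” outweighs the language model’s preference for “cat,” so “hat” wins; real systems search over whole phrases rather than isolated words.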
“For the first time, we could show that brain activity can be decoded specifically enough to use ASR [automatic speech recognition] technology on brain signals. However, the current need for implanted electrodes renders it far from usable in day-to-day life. A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that,” Herff said.