Soon, you could have a mind-reading app on your smartphone!
19 Apr 2017
A mind-reading machine that can translate thoughts into speech is coming closer to reality. Scientists have created a device that reads people's minds through their brainwaves, which could lead to an ''easily operated'' machine that links up to smartphones within the next five years.
The research has been going on for several years, and last year scientists at the University of California, Berkeley successfully played back a word that someone was thinking by monitoring their brain activity. By monitoring temporal lobe activity in a neurosurgical setting, they were able to reproduce on a machine a word a person had just heard.
Among its potentially positive applications, the breakthrough could one day help handicapped people who struggle to speak, such as those who have suffered a stroke, to communicate again.
It could be used as a 'telepathic typewriter' that automatically notes down what we are thinking.
While there is still a long way to go, the scientists said the research could help stroke victims and others with speech paralysis.
Using electrodes placed on the surface of the language areas of patients' brains, they monitored the patterns of electrical responses of brain cells during perceived speech. The scientists then built a computer model that could match spoken sounds to these signals.
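To make the idea of such a computer model concrete, here is a minimal sketch of speech decoding framed as a regression problem: electrode activity in, spectrogram of the heard sound out. The electrode count, feature sizes and the use of ridge regression are assumptions for illustration, and the data is synthetic rather than recordings from the study.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_windows = 2000     # time windows of perceived speech (assumed)
n_electrodes = 64    # electrodes over the language areas (assumed)
n_spec_bins = 32     # frequency bins of the audio spectrogram (assumed)

# Hidden mapping used only to generate toy data.
true_weights = rng.normal(size=(n_electrodes, n_spec_bins))
brain_activity = rng.normal(size=(n_windows, n_electrodes))
spectrogram = brain_activity @ true_weights + 0.5 * rng.normal(size=(n_windows, n_spec_bins))

# Fit the decoding model on half of the data, test it on the rest.
model = Ridge(alpha=1.0).fit(brain_activity[:1000], spectrogram[:1000])
predicted = model.predict(brain_activity[1000:])

# Correlation between predicted and actual spectrograms, a common decoding metric.
corr = np.mean([np.corrcoef(predicted[:, b], spectrogram[1000:, b])[0, 1]
                for b in range(n_spec_bins)])
print(f"mean reconstruction correlation: {corr:.2f}")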
The technology, from researchers at Japan's Toyohashi University of Technology, can recognise the numbers 0 to 9 with 90 per cent accuracy using brain waves. Study participants uttered the numbers and the device guessed them in real time, based on readings from an electroencephalogram (EEG) brain scan.
The mind-reading device was also able to recognise 18 types of Japanese symbols from EEG signals with a 60 per cent accuracy rate. This, the researchers say, shows the possibility of an EEG-activated typewriter in the near future.
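As a rough illustration of how a digit result of this kind might be obtained, the sketch below trains an ordinary multi-class classifier on labelled EEG trials. The channel counts, features and choice of classifier are assumptions, and the data is synthetic, so the printed accuracy will not match the 90 per cent reported above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials = 500                                   # EEG trials, one spoken digit each (assumed)
n_features = 16 * 40                             # channels x features per channel (assumed)
digits = rng.integers(0, 10, size=n_trials)      # labels: spoken digits 0-9

# Toy features: each digit shifts the EEG feature vector in its own direction.
class_means = rng.normal(scale=2.0, size=(10, n_features))
X = class_means[digits] + rng.normal(size=(n_trials, n_features))

X_train, X_test, y_train, y_test = train_test_split(X, digits, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.0%}")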
''Up until now, speech-decoding from EEG signals has had difficulty in collecting enough data to allow the use of powerful algorithms based on "deep learning" or other types of machine learning,'' reads a university statement.
''The research group has developed a different research framework that can achieve high performance with a small training data-set. The group aims to develop a 'Brain Computer Interface' that recognises utterances without voicing, or speech imagery.
''This technology may enable handicapped people, who have lost the ability of voice-communication, to obtain the ability once again.
''Furthermore, the research group plans to develop a device that can be easily operated with fewer electrodes and connected to smartphones within the next five years.''
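The small-data problem mentioned in the statement is well known in EEG work: deep networks usually need far more labelled trials than a single lab session produces. The sketch below is not the Toyohashi group's framework; it simply shows one standard way to cope with few trials, a shrinkage-regularised linear discriminant, with all sizes and data invented for illustration.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_train, n_test, n_features = 40, 200, 64        # tiny training set (assumed sizes)

# Two imagined syllables, each with its own mean EEG feature vector (synthetic).
means = rng.normal(scale=1.5, size=(2, n_features))

def sample(n):
    y = rng.integers(0, 2, size=n)
    return means[y] + rng.normal(size=(n, n_features)), y

X_train, y_train = sample(n_train)
X_test, y_test = sample(n_test)

# 'lsqr' with shrinkage regularises the covariance estimate, which matters
# when there are about as many features as training trials.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X_train, y_train)
print(f"accuracy with only {n_train} training trials: {lda.score(X_test, y_test):.0%}")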
The Berkeley team had been studying how hearing words, speaking out loud and imagining words involve overlapping areas of the brain.
''Now, the challenge is to reproduce comprehensible speech from direct brain recordings done while a person imagines a word they would like to say,'' lead author Professor Robert Knight said at the time.
Professor Knight said the goal of the device is to help people affected by motor conditions such as paralysis and Lou Gehrig's disease.
''There are many neurological disorders that limit speech despite patients being fully aware of what they want to say,'' Professor Knight said.
''We want to develop an implantable device that decodes the signals that occur in the brain when we think about a word, then turn these signals into a sound file that can be reproduced by a speech device.''
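The quote sketches a three-stage pipeline: record brain signals, decode the intended word, and hand the result to a speech device as a sound file. The toy version below mirrors that flow only in outline; the decoder and the tone-based ''synthesis'' are placeholders, since no implementation details of the implant Professor Knight describes are given here.

import math
import struct
import wave
import numpy as np

def decode_imagined_word(brain_signal: np.ndarray) -> str:
    """Placeholder decoder: a real implant would run a trained model here."""
    vocabulary = ["yes", "no", "water", "help"]
    return vocabulary[int(brain_signal.sum()) % len(vocabulary)]

def write_speech_file(word: str, path: str, rate: int = 16000) -> None:
    """Stand-in for speech synthesis: writes a short tone whose pitch depends on the word."""
    freq = 200 + 50 * len(word)
    samples = [int(12000 * math.sin(2 * math.pi * freq * t / rate)) for t in range(rate // 2)]
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(b"".join(struct.pack("<h", s) for s in samples))

signal = np.abs(np.random.default_rng(3).normal(size=1024))   # stand-in brain recording
word = decode_imagined_word(signal)
write_speech_file(word, "decoded_word.wav")
print(f"decoded '{word}' and wrote decoded_word.wav for the speech device")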
Remarkably, the team was then able to decode speech from direct brain recordings made while a person thought of a specific word.
''The new techniques and mathematical processing of the brain signals got us closer to the details we need to extract the signals that are relevant for reproducing speech,'' Professor Knight said.