VEST helps deaf feel, understand speech
11 Apr 2015
A vest that allows the profoundly deaf to "feel" and understand speech is under development by engineering students and their mentors at Rice University and Baylor College of Medicine.
Under the direction of neuroscientist and best-selling author David Eagleman, Rice students are refining a vest with dozens of embedded actuators that vibrate in specific patterns to represent words. The vest responds to input from a phone or tablet app that isolates speech from ambient sound.
Eagleman introduced VEST – Versatile Extra-Sensory Transducer – to the world at a TED Conference talk in March. He is director of the Laboratory for Perception and Action at Baylor College of Medicine and an adjunct assistant professor of electrical and computer engineering at Rice, of which he is also an alumnus.
His lab studies the complex mechanisms of perception through psychophysical, behavioral and computational approaches, as well as the intersection of neuroscience and the law.
The Rice students working on VEST, all electrical and computer engineering majors, call themselves the Eagleman Substitution Project (ESP) team. They include seniors Zihe Huang, Evan Dougal, Eric Kang and Edward Luckett and juniors Abhipray Sahoo and John Yan.
They are aiding Scott Novich, a doctoral student in electrical and computer engineering at Rice who works in Eagleman's lab. Novich devised the algorithm that enables the VEST to "hear" only the human voice and screen out distracting sounds.
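The article does not describe Novich's speech-isolation algorithm itself, but the general idea of letting through only voice-range frequencies can be illustrated with a generic band-pass filter. The sample rate, cutoff frequencies and filter order below are assumptions chosen for illustration, not the team's design.

```python
# Illustrative stand-in for speech isolation: a simple band-pass filter that
# emphasizes the nominal voice band (~300-3400 Hz) and attenuates the rest.
import numpy as np
from scipy.signal import butter, lfilter

def voice_band_filter(audio, sample_rate=16000, low_hz=300.0, high_hz=3400.0):
    """Attenuate energy outside an assumed speech band."""
    nyquist = sample_rate / 2.0
    b, a = butter(4, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return lfilter(b, a, audio)

# Example: emphasize the voice band in one second of noisy audio at 16 kHz.
noisy = np.random.randn(16000)
speech_emphasized = voice_band_filter(noisy)
```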
The low-cost, noninvasive vest collects sounds from a mobile app and converts them into tactile vibration patterns on the user's torso. Haptic feedback supplants auditory input.
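One plausible way to turn sound into a torso-wide vibration pattern is to split each audio frame's spectrum into one frequency band per actuator and drive each motor with its band's energy. The sketch below does exactly that; the frame size, sample rate and the 24-motor count (borrowed from the first prototype described below) are illustrative assumptions, since the team's actual encoding is not detailed in this article.

```python
# Minimal sketch: map one audio frame onto per-actuator vibration intensities.
import numpy as np

NUM_ACTUATORS = 24   # count of the first prototype; used here for illustration
FRAME_SIZE = 1024    # samples per analysis frame (assumed)
SAMPLE_RATE = 16000  # Hz (assumed)

def frame_to_actuator_levels(frame):
    """Return NUM_ACTUATORS vibration intensities in [0, 1] for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Split the magnitude spectrum into one contiguous band per actuator.
    bands = np.array_split(spectrum, NUM_ACTUATORS)
    energies = np.array([band.mean() for band in bands])
    peak = energies.max()
    return energies / peak if peak > 0 else energies

# Example: one frame of synthetic audio mapped onto 24 vibration levels.
frame = np.random.randn(FRAME_SIZE)
levels = frame_to_actuator_levels(frame)
print(levels.shape)  # (24,)
```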
The first VEST prototype put together by the team has 24 actuators sewn into the back. A second version, already in production, will include 40 of the actuators Eagleman calls "vibratory motors." He described the experience, at least for a hearing person, as "feeling the sonic world around me."
"Along with all the actuators, the system includes a controller board and two batteries," said Gary Woods, the team's adviser and a Rice professor in the practice of computer technology. "The actuators vibrate in a very complicated pattern based on audio fed through a smartphone. The patterns are too complicated to translate consciously."
With training, the brains of deaf people adapt to the "translation" process, Eagleman said. Test subjects, some of them deaf from birth, "listened" to spoken words and wrote them on a whiteboard. "They can start understanding the 'language' of the vest," he said.
"We've already run some simple experiments with both hearing and deaf people," Novich said. "As they use the vest more, they get feedback and know whether they are right or wrong and start to memorize patterns. People are able to identify words they have never encountered before."
The project has also prompted students to learn skills they wouldn't necessarily acquire in engineering classrooms. Huang became the team's tailor when he learned to sew via YouTube. "I'm an electrical engineer," he said. "I didn't know anything about sewing." But the teammates' quick-study abilities have paid dividends already.
Last November, ESP placed second in the sixth-annual Undergraduate Elevator Pitch Competition sponsored by the Oshman Engineering Design Kitchen at Rice. In February, the team placed third in the third-annual Owl Open, the Rice student startup competition sponsored by the Rice Alliance for Technology and Entrepreneurship. The team will also present its work this month at the annual Design of Medical Devices conference in Minnesota.
"We see other applications for what we're calling tactile sensory substitution," Sahoo said. "Information can be sent through the human body. It's not just an augmentative device for the deaf. The VEST could be a general neural input device. You could receive any form of information."