The mouth and tongue are exquisitely complex and sensitive organs, and in more ways than one. Of course the tongue does the tasting, but it and the lips also play another important role in eating by allowing detailed sensing of the shape and texture of food and other objects. It’s a sensitivity so great that we evolved to express intimacy through lip-on-lip contact. Now, an innovation from Colorado State University aims to take advantage of that sensitivity and use it to help the deaf and hearing impaired. With simple electrodes and a detailed understanding of the tongue, this team thinks it can eventually achieve real-time translation from the spoken word to electrical braille on the tongue.
The device aims to be a comprehensive tongue translator in the long term, delivering complex patterns of electrical stimulation, but at present the team is still building a tongue map robust enough to allow detailed, accurate stimulation in all patients. They want to produce a single device that could either work in all mouths or calibrate itself to any mouth, and to do that they'll need a better working understanding of how tongues vary across the population. The goal is to record spoken language and translate it in real time, then express that translation through a complex electrode array, one dense enough to spell out whole words rather than individual letters. That would make it harder to learn than braille, but also efficient enough to keep up with rapid conversation.
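To make that pipeline a little more concrete, here is a minimal sketch in Python of how recognized words might be turned into stimulation patterns on a small electrode grid. Everything specific in it, the grid size, the hash-based codebook, and the function names, is an illustrative assumption rather than anything the Colorado State team has published.

```python
# Hypothetical sketch: turning recognized words into on/off patterns for a
# small tongue-mounted electrode grid. Grid size, codebook, and the
# speech-recognition stub are illustrative assumptions, not the CSU design.
import hashlib

GRID_ROWS, GRID_COLS = 4, 8          # assumed 32-electrode array


def word_to_pattern(word: str) -> list[list[int]]:
    """Derive a repeatable on/off pattern for a whole word.

    A real device would use a learned codebook (an "electrical braille"
    vocabulary); here we simply hash the word so each one gets a stable,
    distinct pattern.
    """
    digest = hashlib.sha256(word.lower().encode()).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(GRID_ROWS * GRID_COLS)]
    return [bits[r * GRID_COLS:(r + 1) * GRID_COLS] for r in range(GRID_ROWS)]


def stimulate(pattern: list[list[int]]) -> None:
    """Stand-in for driving the electrode array; here we just print the grid."""
    for row in pattern:
        print("".join("#" if on else "." for on in row))
    print()


def translate_utterance(words: list[str]) -> None:
    """Stream one whole-word pattern after another, as the article describes."""
    for word in words:
        stimulate(word_to_pattern(word))


# Example: a recognized phrase arriving from a speech-to-text front end.
translate_utterance(["hello", "how", "are", "you"])
```

A real system would swap the hash for a learned vocabulary and pace the patterns to the incoming recognizer, but the overall shape, audio in, spatial pattern out, would be the same.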
The hope is that users will eventually be able to “hear” this device’s electrical output as naturally as regular auditory information. This is roughly like the claim by some blind people that they can almost see the mental pictures they build from the sounds and echoes around them; with enough repetition, the brain can often adapt to new types of stimulation and learn to treat them like a natural sense. Whether or not patients can internalize this tongue input to that extent, it could still help them quickly make sense of audio, from spoken language to the sound of a wailing siren, without the need to carry or touch anything.
The biggest and most glaring problem with this idea is that, sensitive though the mouth may be, it’s really not the most efficient way to hear things. We live in an era when multiple world-leading technology companies are competing for the right to put a tiny screen in front of your eye 24 hours a day; that seems like a pretty logical inroad to real-time translation for the deaf, and it doesn’t even require them to learn a language other than their native one. With Google and others quickly approaching the threshold of real-time audio translation, deaf people born today might never know a time in which the world did not come with built-in subtitles displayed in their (Google/Sony/Oakley/McDonalds brand) visor.
On the other hand, constant subtitling would eat into the screen space available for more conventional eye-screen apps, and the brain could never truly assimilate it as a replacement for normal hearing. The mouth, by contrast, has a lot of sensing bandwidth, and most of that potential goes unused most of the time. If the brain can really adapt to interpret these electrode signals like cochlear input, then visual subtitles would seem laughably archaic.

This is ambitious work, in that it aims to make certain kinds of translation utterly moot. In reality, there’s no particular reason it would need to be confined to the deaf; if it works, and the brain can adapt relatively easily, there’s no reason healthy people couldn’t wear these to hear both a Japanese businessman’s real inflection and a tongue-borne English translation at the same time. Since the spoken Japanese would register mostly as abstract emotional information, without intelligible words to distract, this might not even be all that disorienting, and could become a natural mode of communication over time.
The work is quite simple at its heart; the only reason this couldn’t have been done on the arm is that only the tongue has the sensory resolution to receive data fast enough to keep up with real-time spoken language. Mapping the tongue to exploit this “fat pipe” to the brain could be a remarkably powerful tool in helping everyone, not just the disabled, derive more information about the world around them.
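For a rough sense of what “fast enough” means, here is a back-of-envelope sketch using assumed numbers (the article gives none): conversational speech at roughly 150 words per minute and the same hypothetical 32-electrode grid as in the earlier sketch.

```python
# Back-of-envelope throughput estimate; all figures are assumptions.
WORDS_PER_MINUTE = 150          # typical conversational speech rate (assumed)
ELECTRODES = 32                 # hypothetical grid from the earlier sketch
BITS_PER_PATTERN = ELECTRODES   # one on/off state per electrode, per word

patterns_per_second = WORDS_PER_MINUTE / 60
bits_per_second = patterns_per_second * BITS_PER_PATTERN

print(f"{patterns_per_second:.1f} word-patterns/s, ~{bits_per_second:.0f} bits/s")
# About 2.5 distinct spatial patterns per second, each of which the skin must
# resolve in full detail. The raw data rate is tiny; the bottleneck is spatial
# resolution, which is exactly where the tongue beats the arm.
```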