Future hearing aids could be adjusted by the wearer to remove background noise using new technology that could also be used to clean up and search YouTube videos.
A noisy restaurant, a busy road, a windy day – all can be intensely frustrating for the hearing impaired trying to pick out speech from the surrounding din. Some 10 million people in the UK suffer from hearing difficulties and, helpful as hearing aids are, those who wear them often complain that background noise remains a problem.
What if hearing device wearers could choose to filter out all the troublesome sounds and focus on the voices they want to hear? Engineer Dr Richard Turner believes that this is fast becoming a possibility. He is developing a system that identifies the corrupting noise and “rubs it out”.
“The poor performance of current hearing devices in noise is a major reason why six million people in the UK who would benefit from a hearing aid do not use them,” he said. Moreover, as the population ages, a greater number of people will be hindered by the inability to hear clearly. In addition, patients fitted with cochlear implants – devices implanted into the inner ear that stimulate the auditory nerve directly when the cochlea’s hair cells have died – face similar limitations.
The solution lies in the statistics of sound, as Turner explained: “Many interfering noises are immediately recognisable. Raindrops patter on a surface, a fire crackles, talkers babble at a party and the wind howls. But what makes these so-called auditory textures sound the way they do? No two rain sounds are identical because the precise arrangement of falling water droplets is never repeated. Nonetheless, there must be a statistical similarity in the sounds compared with, say, the crackle of a fire.
“For this reason, we think the brain groups together different aspects of sounds using prior experience of their characteristic statistical structure. We can model this mathematically using a form of statistical reasoning called Bayesian inference and then develop computer algorithms that mimic what the brain is doing.”
The mathematical system that he and colleagues have developed is capable of being “trained” – a process that uses new methods from the field of machine learning – so that it can recognise sounds. “Rather surprisingly, it seems that a relatively small set of statistics is sufficient to describe a large number of sounds.”
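To make the idea concrete, here is a minimal sketch of the kind of subband envelope statistics such a system might compute. The band choices, the particular statistics and the NumPy/SciPy implementation are illustrative assumptions for this article, not the team’s actual model:

```python
# Toy illustration: summarise an auditory texture by the statistics of its
# subband envelopes. An assumed stand-in, not the researchers' model.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_statistics(audio, sr, bands=((100, 400), (400, 1600), (1600, 6400))):
    """Summarise a mono signal by mean, variance and skew of its subband envelopes."""
    stats = []
    for lo, hi in bands:
        # Band-pass filter the signal into one frequency channel.
        sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        subband = sosfiltfilt(sos, audio)
        # The envelope carries the slow amplitude fluctuations that
        # distinguish, say, rain from a crackling fire.
        env = np.abs(hilbert(subband))
        m, v = env.mean(), env.var()
        skew = ((env - m) ** 3).mean() / (v ** 1.5 + 1e-12)
        stats.extend([m, v, skew])
    return np.array(stats)

# Example: a nine-number summary of one second of white noise at 16 kHz.
sr = 16000
print(texture_statistics(np.random.randn(sr), sr))
```

Even a small summary vector like this hints at why a modest set of statistics can go a long way towards telling one texture from another.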
Crucially, the system is capable of telling the difference between speech and audio textures. “What we can now do in an adaptive way is to remove background noise and pass these cleaned up sounds to a listener to improve their perception in a difficult environment,” said Turner, who is working with hearing experts Professor Brian Moore at the Department of Experimental Psychology and Dr Robert Carlyon at the Medical Research Council Cognition and Brain Sciences Unit, with funding from the Engineering and Physical Sciences Research Council.
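The article does not spell out the algorithm itself, but a classic baseline for this kind of adaptive clean-up is spectral subtraction: estimate the noise spectrum from a speech-free stretch of sound, then trim it out of every frame. The sketch below shows that standard baseline, not the group’s own method:

```python
# Spectral-subtraction sketch of the "rub out the noise" idea.
# A common baseline, assumed here for illustration only.
import numpy as np
from scipy.signal import stft, istft

def denoise(audio, sr, noise_clip, frame=512):
    """Subtract an estimated noise magnitude spectrum from each frame."""
    _, _, N = stft(noise_clip, fs=sr, nperseg=frame)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)    # average noise spectrum
    _, _, S = stft(audio, fs=sr, nperseg=frame)
    mag, phase = np.abs(S), np.angle(S)
    # Floor the result so over-subtraction does not create "musical noise".
    clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)
    _, cleaned = istft(clean_mag * np.exp(1j * phase), fs=sr, nperseg=frame)
    return cleaned

# Usage with stand-in signals: estimate noise, then clean a noisy mixture.
sr = 16000
noise = 0.3 * np.random.randn(sr)                            # "wind" stand-in
speech = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)    # "voice" stand-in
cleaned = denoise(speech + 0.3 * np.random.randn(2 * sr), sr, noise)
```

A real hearing device would need to track the noise estimate continuously and keep the processing delay to a few milliseconds, which is part of what makes the engineering challenge hard.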
The idea is that future devices will have several different modes in which they can operate. These might include a mode for travelling in a car or on a train, a mode for environments like a party or a noisy restaurant, a mode for outdoor environments that are windy, and so on. The device might intelligently select an appropriate mode based on the characteristics of the incoming sound. Alternatively, the user could override this and select a processing mode based upon what sorts of noise they wish to erase.
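As a hypothetical illustration of how such automatic mode selection could work, the sketch below trains a simple classifier on statistics vectors like those above and lets the user override its choice. The mode names, labels and training data are invented for the example:

```python
# Hypothetical mode selection: classify the ambient sound from its texture
# statistics, with an optional user override. Assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

MODES = {0: "car/train mode", 1: "party mode", 2: "windy outdoor mode"}

# Stand-in training data: each row would really be a texture-statistics
# vector (like the nine-number summary above) labelled with its noise type.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(300, 9)) + np.repeat(np.arange(3), 100)[:, None]
train_y = np.repeat(np.arange(3), 100)
clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

def select_mode(stats_vector, override=None):
    """Return the user's chosen mode if set, else the classifier's guess."""
    if override is not None:
        return MODES[override]
    return MODES[int(clf.predict(stats_vector.reshape(1, -1))[0])]

print(select_mode(train_X[250]))               # automatic selection
print(select_mode(train_X[250], override=1))   # user override from a phone
```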
“In a sense we are developing the technology to underpin intelligent hearing devices,” he added. “One possibility would be for users to control their device using an interface on a mobile phone through wireless communication. This would allow users to guide the processing as they wish.”
Turner anticipates a further two years of simulating the effects of the sound-cleaning processing before the team starts working with device specialists. “If these preliminary tests go well, then we’ll be looking to work with hearing device companies to try to adapt their processing to incorporate these machine learning techniques. If all goes well, we would hope that this technology will be available in consumer devices within 10 years.”
Tinnitus sufferers could also benefit from the technology. Plagued by a constant ringing in the ears, people with tinnitus sometimes use environmental sound generators as a distraction. Such generators offer a limited selection of sounds – a babbling brook, waves lapping, leaves rustling – but, with the new technology, “patients could traverse the entire space of audio textures and figure out where in this enormous spectrum is the best sound for relieving their tinnitus,” added Turner.
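As a toy illustration of traversing that space, the sketch below parameterises a texture by per-band noise gains, so sweeping the gain vector moves smoothly between different noise “colours”. This crude parameterisation is a stand-in for the much richer statistical models described above:

```python
# Toy texture generator: band-passed noise mixed with adjustable gains.
# The bands and parameterisation are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_texture(gains, sr=16000, seconds=2.0,
                 bands=((100, 400), (400, 1600), (1600, 6400))):
    """Mix band-passed noise with per-band gains to get one texture."""
    noise = np.random.randn(int(sr * seconds))
    out = np.zeros_like(noise)
    for g, (lo, hi) in zip(gains, bands):
        sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        out += g * sosfiltfilt(sos, noise)
    return out / (np.abs(out).max() + 1e-9)   # normalise for playback

# Different gain vectors give audibly different textures to explore.
low_rumble = make_texture([1.0, 0.3, 0.1])
bright_hiss = make_texture([0.1, 0.4, 1.0])
```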
The technology not only holds promise for helping the hearing impaired, but it also has the potential to improve mobile phone communication – anyone who has ever tried to hold a conversation with someone phoning from a crowded room will recognise the possible benefits of such a facility.
Moreover, with 100 hours of video now being uploaded to YouTube every minute, Google has seen the potential for systems that can recognise audio content and is funding part of Turner’s research. “As an example, a YouTube video containing a conversation that takes place by a busy roadside on a windy day could be automatically categorised based on the speech, traffic and wind noises present in the soundtrack, allowing users to search videos for these categories. In addition, the soundtrack could also be made more intelligible by isolating the speech from the noises – one can imagine users being offered the chance to de-noise their video during the upload process.
“We think this new framework will form a foundation of the emerging field of ‘machine hearing’. In the future, machine hearing will be standard in a vast range of applications from hearing devices, which is a market worth £18 billion per annum, to audio searching, and from music processing tasks to augmented reality systems. We believe this research project will kick-start this proliferation.”
For more information, please contact Louise Walsh (lw355@admin.cam.ac.uk).